Systematic Reviews and Meta-Analysis

  • Getting Started
  • Guides and Standards
  • Review Protocols
  • Databases and Sources
  • Randomized Controlled Trials
  • Controlled Clinical Trials
  • Observational Designs
  • Tests of Diagnostic Accuracy
  • Software and Tools
  • Where do I get all those articles?
  • Collaborations
  • EPI 233/528
  • Countway Mediated Search
  • Risk of Bias (RoB)

Systematic review Q & A

What is a systematic review?

A systematic review is a guided filtering and synthesis of all available evidence addressing a specific, focused research question, generally about a specific intervention or exposure. The use of standardized, systematic methods and pre-selected eligibility criteria reduces the risk of bias in identifying, selecting, and analyzing relevant studies. A well-designed systematic review includes clear objectives, pre-selected criteria for identifying eligible studies, an explicit methodology, a thorough and reproducible search of the literature, an assessment of the validity or risk of bias of each included study, and a systematic synthesis, analysis, and presentation of the findings of the included studies. A systematic review may include a meta-analysis.

For details about carrying out systematic reviews, see the Guides and Standards section of this guide.

Is my research topic appropriate for systematic review methods?

A systematic review is best deployed to test a specific hypothesis about a healthcare or public health intervention or exposure. By focusing on a single intervention or a few specific interventions for a particular condition, the investigator can ensure a manageable results set. Moreover, examining a single intervention or a small set of related interventions, exposures, or outcomes simplifies the assessment of studies and the synthesis of the findings.

Systematic reviews are poor tools for hypothesis generation: for instance, to determine what interventions have been used to increase the awareness and acceptability of a vaccine, or to investigate the ways that predictive analytics have been used in health care management. In the first case, we don't know what interventions to search for and so have to screen all the articles about awareness and acceptability. In the second, there is no agreed-upon set of methods that make up predictive analytics, and health care management is far too broad. The search will necessarily be incomplete, vague, and very large all at once. In most cases, reviews without clearly and precisely specified populations, interventions, exposures, and outcomes will produce results sets that quickly outstrip the resources of a small team and offer no consistent way to assess and synthesize findings from the studies that are identified.

If not a systematic review, then what?

You might consider performing a scoping review. This framework allows iterative searching over a reduced number of data sources and does not require assessing individual studies for risk of bias. The framework includes built-in mechanisms to adjust the analysis as the work progresses and more is learned about the topic. A scoping review won't help you limit the number of records you'll need to screen (broad questions lead to large results sets) but may give you a means of dealing with a large set of results.

This tool can help you decide what kind of review is right for your question.

Can my student complete a systematic review during her summer project?

Probably not. Systematic reviews are a lot of work. Between creating the protocol, building and running a quality search, collecting all the papers, evaluating the studies that meet the inclusion criteria, and extracting and analyzing the summary data, a well-done review can require dozens to hundreds of hours of work spanning several months. Moreover, a systematic review requires subject expertise, statistical support, and a librarian to help design and run the search. Be aware that librarians sometimes have queues for their search time; it may take several weeks to complete and run a search. All guidelines for carrying out systematic reviews also recommend that at least two subject experts screen the studies identified in the search. The first round of screening can consume one hour per screener for every 100-200 records, so a search that returns 10,000 records would demand roughly 50-100 hours from each screener for that step alone. A systematic review is a labor-intensive team effort.

How can I know if my topic has been reviewed already?

Before starting out on a systematic review, check to see if someone has done it already. In PubMed you can use the systematic review subset to limit your results to a broad group of papers that is enriched for systematic reviews. You can invoke the subset by selecting it from the Article Types filters to the left of your PubMed results, or you can append AND systematic[sb] to your search. For example:

"neoadjuvant chemotherapy" AND systematic[sb]

The systematic review subset is very noisy, however. To quickly focus on systematic reviews (knowing that you may be missing some), simply search for the word systematic in the title:

"neoadjuvant chemotherapy" AND systematic[ti]

Any PRISMA-compliant systematic review will be captured by this method since including the words "systematic review" in the title is a requirement of the PRISMA checklist. Cochrane systematic reviews do not include 'systematic' in the title, however. It's worth checking the Cochrane Database of Systematic Reviews independently.
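
If you prefer to check result counts programmatically, the sketch below queries PubMed through NCBI's public E-utilities API. This is a minimal illustration, not an official workflow of this guide; the endpoint and field tags are real, and the query strings simply reuse the examples above.

```python
# Minimal sketch: count existing systematic reviews on a topic via the
# NCBI E-utilities "esearch" endpoint (no API key needed for light use).
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def count_hits(term: str) -> int:
    """Return the number of PubMed records matching `term`."""
    resp = requests.get(ESEARCH, params={
        "db": "pubmed",
        "term": term,
        "retmode": "json",
        "retmax": 0,          # we only need the count, not the record IDs
    })
    resp.raise_for_status()
    return int(resp.json()["esearchresult"]["count"])

# The broad, noisy subset versus the narrower title-word filter:
print(count_hits('"neoadjuvant chemotherapy" AND systematic[sb]'))
print(count_hits('"neoadjuvant chemotherapy" AND systematic[ti]'))
```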

You can also search for protocols that will indicate that another group has set out on a similar project. Many investigators will register their protocols in PROSPERO, a registry of review protocols. Other published protocols as well as Cochrane Review protocols appear in the Cochrane Methodology Register, a part of the Cochrane Library.



Systematic Review | Definition, Example & Guide

Published on June 15, 2022 by Shaun Turney. Revised on November 20, 2023.

A systematic review is a type of review that uses repeatable methods to find, select, and synthesize all available evidence. It answers a clearly formulated research question and explicitly states the methods used to arrive at the answer.

For example, the authors of one systematic review on probiotics (the running example in this article, by Boyle and colleagues) answered the question “What is the effectiveness of probiotics in reducing eczema symptoms and improving quality of life in patients with eczema?”

In this context, a probiotic is a health product that contains live microorganisms and is taken by mouth. Eczema is a common skin condition that causes red, itchy skin.

Table of contents

  • What is a systematic review?
  • Systematic review vs. meta-analysis
  • Systematic review vs. literature review
  • Systematic review vs. scoping review
  • When to conduct a systematic review
  • Pros and cons of systematic reviews
  • Step-by-step example of a systematic review
  • Frequently asked questions about systematic reviews

A review is an overview of the research that’s already been completed on a topic.

What makes a systematic review different from other types of reviews is that the research methods are designed to reduce bias. The methods are repeatable, and the approach is formal and systematic:

  • Formulate a research question
  • Develop a protocol
  • Search for all relevant studies
  • Apply the selection criteria
  • Extract the data
  • Synthesize the data
  • Write and publish a report

Although multiple sets of guidelines exist, the Cochrane Handbook for Systematic Reviews is among the most widely used. It provides detailed guidelines on how to complete each step of the systematic review process.

Systematic reviews are most commonly used in medical and public health research, but they can also be found in other disciplines.

Systematic reviews typically answer their research question by synthesizing all available evidence and evaluating the quality of the evidence. Synthesizing means bringing together different information to tell a single, cohesive story. The synthesis can be narrative (qualitative), quantitative, or both.


Systematic reviews often quantitatively synthesize the evidence using a meta-analysis. A meta-analysis is a statistical analysis, not a type of review.

A meta-analysis is a technique to synthesize results from multiple studies. It’s a statistical analysis that combines the results of two or more studies, usually to estimate an effect size.
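
To make the idea concrete, here is a minimal sketch of the inverse-variance pooling behind a typical meta-analysis, with a DerSimonian-Laird random-effects variant. The effect sizes and variances are made-up illustrative numbers, not data from any real review.

```python
# Minimal sketch of inverse-variance meta-analysis (fixed effect, plus a
# DerSimonian-Laird random-effects estimate). The effect sizes and
# variances below are hypothetical, e.g. log risk ratios.
import math

effects   = [0.10, -0.20, 0.05, -0.15]   # hypothetical per-study estimates
variances = [0.04, 0.09, 0.02, 0.06]     # their sampling variances

def pooled(effects, variances):
    w = [1.0 / v for v in variances]                    # inverse-variance weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q and the DerSimonian-Laird between-study variance tau^2
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    wr = [1.0 / (v + tau2) for v in variances]          # random-effects weights
    random_eff = sum(wi * yi for wi, yi in zip(wr, effects)) / sum(wr)
    se = math.sqrt(1.0 / sum(wr))
    return fixed, random_eff, (random_eff - 1.96 * se, random_eff + 1.96 * se)

fixed, rand, ci = pooled(effects, variances)
print(f"fixed={fixed:.3f}  random={rand:.3f}  95% CI={ci[0]:.3f}..{ci[1]:.3f}")
```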

A literature review is a type of review that uses a less systematic and formal approach than a systematic review. Typically, an expert in a topic will qualitatively summarize and evaluate previous work, without using a formal, explicit method.

Although literature reviews are often less time-consuming and can be insightful or helpful, they have a higher risk of bias and are less transparent than systematic reviews.

Similar to a systematic review, a scoping review is a type of review that tries to minimize bias by using transparent and repeatable methods.

However, a scoping review isn’t a type of systematic review. The most important difference is the goal: rather than answering a specific question, a scoping review explores a topic. The researcher tries to identify the main concepts, theories, and evidence, as well as gaps in the current research.

Sometimes scoping reviews are an exploratory preparation step for a systematic review, and sometimes they are a standalone project.

A systematic review is a good choice of review if you want to answer a question about the effectiveness of an intervention, such as a medical treatment.

To conduct a systematic review, you’ll need the following:

  • A precise question, usually about the effectiveness of an intervention. The question needs to be about a topic that’s previously been studied by multiple researchers. If there’s no previous research, there’s nothing to review.
  • If you’re doing a systematic review on your own (e.g., for a research paper or thesis), you should take appropriate measures to ensure the validity and reliability of your research.
  • Access to databases and journal archives. Often, your educational institution provides you with access.
  • Time. A professional systematic review is a time-consuming process: it will take the lead author about six months of full-time work. If you’re a student, you should narrow the scope of your systematic review and stick to a tight schedule.
  • Bibliographic, word-processing, spreadsheet, and statistical software. For example, you could use EndNote, Microsoft Word, Excel, and SPSS.

A systematic review has many pros.

  • They minimize research bias by considering all available evidence and evaluating each study for bias.
  • Their methods are transparent, so they can be scrutinized by others.
  • They’re thorough: they summarize all available evidence.
  • They can be replicated and updated by others.

Systematic reviews also have a few cons.

  • They’re time-consuming.
  • They’re narrow in scope: they only answer the precise research question.

The 7 steps for conducting a systematic review are explained with an example.

Step 1: Formulate a research question

Formulating the research question is probably the most important step of a systematic review. A clear research question will:

  • Allow you to more effectively communicate your research to other researchers and practitioners
  • Guide your decisions as you plan and conduct your systematic review

A good research question for a systematic review has four components, which you can remember with the acronym PICO:

  • Population(s) or problem(s)
  • Intervention(s)
  • Comparison(s)
  • Outcome(s)

You can rearrange these four components to write your research question:

  • What is the effectiveness of I versus C for O in P?

Sometimes, you may want to include a fifth component, the type of study design. In this case, the acronym is PICOT:

  • Type of study design(s)

In the eczema example, Boyle and colleagues used these PICOT components:

  • The population of patients with eczema
  • The intervention of probiotics
  • In comparison to no treatment, placebo, or non-probiotic treatment
  • The outcome of changes in participant-, parent-, and doctor-rated symptoms of eczema and quality of life
  • Randomized controlled trials, a type of study design

Boyle and colleagues’ research question was:

  • What is the effectiveness of probiotics versus no treatment, a placebo, or a non-probiotic treatment for reducing eczema symptoms and improving quality of life in patients with eczema?
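
For illustration only, the PICOT components can be captured in a small data structure that renders the template question. The class below is hypothetical, and the field values follow the eczema example.

```python
# Minimal sketch: representing PICO(T) components as a data structure and
# rendering the template question from them.
from dataclasses import dataclass

@dataclass
class PICOT:
    population: str
    intervention: str
    comparison: str
    outcome: str
    study_design: str = ""   # optional "T" component

    def question(self) -> str:
        return (f"What is the effectiveness of {self.intervention} versus "
                f"{self.comparison} for {self.outcome} in {self.population}?")

q = PICOT(
    population="patients with eczema",
    intervention="probiotics",
    comparison="no treatment, a placebo, or a non-probiotic treatment",
    outcome="reducing eczema symptoms and improving quality of life",
    study_design="randomized controlled trials",
)
print(q.question())
```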

Step 2: Develop a protocol

A protocol is a document that contains your research plan for the systematic review. This is an important step because having a plan allows you to work more efficiently and reduces bias.

Your protocol should include the following components:

  • Background information: Provide the context of the research question, including why it’s important.
  • Research objective(s): Rephrase your research question as an objective.
  • Selection criteria: State how you’ll decide which studies to include or exclude from your review.
  • Search strategy: Discuss your plan for finding studies.
  • Analysis: Explain what information you’ll collect from the studies and how you’ll synthesize the data.

If you’re a professional seeking to publish your review, it’s a good idea to bring together an advisory committee. This is a group of about six people who have experience in the topic you’re researching. They can help you make decisions about your protocol.

It’s highly recommended to register your protocol. Registering your protocol means submitting it to a database such as PROSPERO or ClinicalTrials.gov.

Step 3: Search for all relevant studies

Searching for relevant studies is the most time-consuming step of a systematic review.

To reduce bias, it’s important to search for relevant studies very thoroughly. Your strategy will depend on your field and your research question, but sources generally fall into these four categories:

  • Databases: Search multiple databases of peer-reviewed literature, such as PubMed or Scopus. Think carefully about how to phrase your search terms and include multiple synonyms of each word. Use Boolean operators if relevant (see the sketch after this list).
  • Handsearching: In addition to searching the primary sources using databases, you’ll also need to search manually. One strategy is to scan relevant journals or conference proceedings. Another strategy is to scan the reference lists of relevant studies.
  • Gray literature: Gray literature includes documents produced by governments, universities, and other institutions that aren’t published by traditional publishers. Graduate student theses are an important type of gray literature, which you can search using the Networked Digital Library of Theses and Dissertations (NDLTD). In medicine, clinical trial registries are another important type of gray literature.
  • Experts: Contact experts in the field to ask if they have unpublished studies that should be included in your review.
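
As referenced in the Databases item above, here is a minimal sketch of assembling a Boolean search string from synonym groups. The concepts and synonyms are illustrative, not a validated search strategy.

```python
# Minimal sketch: OR synonyms within a concept, AND across concepts.
# The synonym lists below are illustrative placeholders.
concepts = {
    "probiotic": ["probiotic*", "lactobacillus", "bifidobacterium"],
    "eczema":    ["eczema", "atopic dermatitis", "atopic eczema"],
}

def boolean_query(concepts: dict[str, list[str]]) -> str:
    """Build a single Boolean query string from groups of synonyms."""
    groups = []
    for terms in concepts.values():
        quoted = [f'"{t}"' if " " in t else t for t in terms]  # quote phrases
        groups.append("(" + " OR ".join(quoted) + ")")
    return " AND ".join(groups)

print(boolean_query(concepts))
# (probiotic* OR lactobacillus OR bifidobacterium) AND
# (eczema OR "atopic dermatitis" OR "atopic eczema")
```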

At this stage of your review, you won’t read the articles yet. Simply save any potentially relevant citations using bibliographic software, such as Scribbr’s APA or MLA Generator.

Boyle and colleagues searched the following sources:

  • Databases: EMBASE, PsycINFO, AMED, LILACS, and ISI Web of Science
  • Handsearch: Conference proceedings and reference lists of articles
  • Gray literature: The Cochrane Library, the metaRegister of Controlled Trials, and the Ongoing Skin Trials Register
  • Experts: Authors of unpublished registered trials, pharmaceutical companies, and manufacturers of probiotics

Step 4: Apply the selection criteria

Applying the selection criteria is a three-person job. Two of you will independently read the studies and decide which to include in your review based on the selection criteria you established in your protocol. The third person’s job is to break any ties.

To increase inter-rater reliability, ensure that everyone thoroughly understands the selection criteria before you begin.
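
One common way to quantify inter-rater reliability between two screeners is Cohen's kappa. The sketch below is a minimal illustration with made-up include/exclude decisions; real reviews often compute this with a statistics package instead.

```python
# Minimal sketch: Cohen's kappa for agreement between two screeners on
# include/exclude decisions. The decision lists are illustrative.
def cohens_kappa(rater_a: list[bool], rater_b: list[bool]) -> float:
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under chance, from each rater's inclusion rate
    pa, pb = sum(rater_a) / n, sum(rater_b) / n
    expected = pa * pb + (1 - pa) * (1 - pb)
    return (observed - expected) / (1 - expected)

a = [True, True, False, False, True, False, False, False]
b = [True, False, False, False, True, False, True, False]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # ~0.47 for these toy data
```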

If you’re writing a systematic review as a student for an assignment, you might not have a team. In this case, you’ll have to apply the selection criteria on your own; you can mention this as a limitation in your paper’s discussion.

You should apply the selection criteria in two phases:

  • Based on the titles and abstracts: Decide whether each article potentially meets the selection criteria based on the information provided in the abstracts.
  • Based on the full texts: Download the articles that weren’t excluded during the first phase. If an article isn’t available online or through your library, you may need to contact the authors to ask for a copy. Read the articles and decide which articles meet the selection criteria.

It’s very important to keep a meticulous record of why you included or excluded each article. When the selection process is complete, you can summarize what you did using a PRISMA flow diagram.

In the example review, after the first screening phase, Boyle and colleagues found the full texts for each of the remaining studies. Boyle and Tang read through the articles to decide if any more studies needed to be excluded based on the selection criteria.

When Boyle and Tang disagreed about whether a study should be excluded, they discussed it with Varigos until the three researchers came to an agreement.

Step 5: Extract the data

Extracting the data means collecting information from the selected studies in a systematic way. There are two types of information you need to collect from each study:

  • Information about the study’s methods and results. The exact information will depend on your research question, but it might include the year, study design, sample size, context, research findings, and conclusions. If any data are missing, you’ll need to contact the study’s authors.
  • Your judgment of the quality of the evidence, including risk of bias.

You should collect this information using forms. You can find sample forms in The Registry of Methods and Tools for Evidence-Informed Decision Making and the Grading of Recommendations, Assessment, Development and Evaluations Working Group .

Extracting the data is also a three-person job. Two people should do this step independently, and the third person will resolve any disagreements.

Boyle and colleagues also collected data about possible sources of bias, such as how the study participants were randomized into the control and treatment groups.
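
A data extraction form can be as simple as a structured record written to a spreadsheet. The sketch below is hypothetical; the field names would be tailored to your protocol rather than taken from any standard form.

```python
# Minimal sketch of a structured data extraction record. All field names
# and values are hypothetical placeholders.
import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class ExtractionRecord:
    study_id: str
    year: int
    design: str             # e.g. "RCT"
    sample_size: int
    effect_estimate: float  # e.g. log risk ratio
    variance: float
    risk_of_bias: str       # e.g. "low", "some concerns", "high"

records = [
    ExtractionRecord("Smith 2019", 2019, "RCT", 120, -0.15, 0.05, "low"),
    ExtractionRecord("Lee 2021", 2021, "RCT", 85, 0.02, 0.08, "some concerns"),
]

# Write the records to CSV so both extractors' sheets can be compared.
with open("extraction.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(ExtractionRecord)])
    writer.writeheader()
    writer.writerows(asdict(r) for r in records)
```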

Step 6: Synthesize the data

Synthesizing the data means bringing together the information you collected into a single, cohesive story. There are two main approaches to synthesizing the data:

  • Narrative (qualitative): Summarize the information in words. You’ll need to discuss the studies and assess their overall quality.
  • Quantitative: Use statistical methods to summarize and compare data from different studies. The most common quantitative approach is a meta-analysis, which allows you to combine results from multiple studies into a summary result.

Generally, you should use both approaches together whenever possible. If you don’t have enough data, or the data from different studies aren’t comparable, then you can take just a narrative approach. However, you should justify why a quantitative approach wasn’t possible.

Boyle and colleagues also divided the studies into subgroups, such as studies about babies, children, and adults, and analyzed the effect sizes within each group.

Step 7: Write and publish a report

The purpose of writing a systematic review article is to share the answer to your research question and explain how you arrived at this answer.

Your article should include the following sections:

  • Abstract: A summary of the review
  • Introduction: Including the rationale and objectives
  • Methods: Including the selection criteria, search method, data extraction method, and synthesis method
  • Results: Including results of the search and selection process, study characteristics, risk of bias in the studies, and synthesis results
  • Discussion: Including interpretation of the results and limitations of the review
  • Conclusion: The answer to your research question and implications for practice, policy, or research

To verify that your report includes everything it needs, you can use the PRISMA checklist.

Once your report is written, you can publish it in a systematic review database, such as the Cochrane Database of Systematic Reviews, and/or in a peer-reviewed journal.

In their report, Boyle and colleagues concluded that probiotics cannot be recommended for reducing eczema symptoms or improving quality of life in patients with eczema.

Note: Generative AI tools like ChatGPT can be useful at various stages of the writing and research process and can help you to write your systematic review. However, we strongly advise against trying to pass AI-generated text off as your own work.


Frequently asked questions about systematic reviews

A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question. It is often written as part of a thesis, dissertation, or research paper in order to situate your work in relation to existing knowledge. Literature reviews give an overview of knowledge on a subject, helping you identify relevant theories and methods, as well as gaps in existing research, and are set up similarly to other academic texts, with an introduction, a main body, and a conclusion.

An annotated bibliography is a list of source references that has a short description (called an annotation) for each of the sources. It is often assigned as part of the research process for a paper.

A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.



The PRISMA 2020 statement: an updated guideline for reporting systematic reviews

  • Matthew J Page, senior research fellow 1
  • Joanne E McKenzie, associate professor 1
  • Patrick M Bossuyt, professor 2
  • Isabelle Boutron, professor 3
  • Tammy C Hoffmann, professor 4
  • Cynthia D Mulrow, professor 5
  • Larissa Shamseer, doctoral student 6
  • Jennifer M Tetzlaff, research product specialist 7
  • Elie A Akl, professor 8
  • Sue E Brennan, senior research fellow 1
  • Roger Chou, professor 9
  • Julie Glanville, associate director 10
  • Jeremy M Grimshaw, professor 11
  • Asbjørn Hróbjartsson, professor 12
  • Manoj M Lalu, associate scientist and assistant professor 13
  • Tianjing Li, associate professor 14
  • Elizabeth W Loder, professor 15
  • Evan Mayo-Wilson, associate professor 16
  • Steve McDonald, senior research fellow 1
  • Luke A McGuinness, research associate 17
  • Lesley A Stewart, professor and director 18
  • James Thomas, professor 19
  • Andrea C Tricco, scientist and associate professor 20
  • Vivian A Welch, associate professor 21
  • Penny Whiting, associate professor 17
  • David Moher, director and professor 22
  • 1 School of Public Health and Preventive Medicine, Monash University, Melbourne, Australia
  • 2 Department of Clinical Epidemiology, Biostatistics and Bioinformatics, Amsterdam University Medical Centres, University of Amsterdam, Amsterdam, Netherlands
  • 3 Université de Paris, Centre of Epidemiology and Statistics (CRESS), Inserm, F 75004 Paris, France
  • 4 Institute for Evidence-Based Healthcare, Faculty of Health Sciences and Medicine, Bond University, Gold Coast, Australia
  • 5 University of Texas Health Science Center at San Antonio, San Antonio, Texas, USA; Annals of Internal Medicine
  • 6 Knowledge Translation Program, Li Ka Shing Knowledge Institute, Toronto, Canada; School of Epidemiology and Public Health, Faculty of Medicine, University of Ottawa, Ottawa, Canada
  • 7 Evidence Partners, Ottawa, Canada
  • 8 Clinical Research Institute, American University of Beirut, Beirut, Lebanon; Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, Ontario, Canada
  • 9 Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, Oregon, USA
  • 10 York Health Economics Consortium (YHEC Ltd), University of York, York, UK
  • 11 Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada; School of Epidemiology and Public Health, University of Ottawa, Ottawa, Canada; Department of Medicine, University of Ottawa, Ottawa, Canada
  • 12 Centre for Evidence-Based Medicine Odense (CEBMO) and Cochrane Denmark, Department of Clinical Research, University of Southern Denmark, Odense, Denmark; Open Patient data Exploratory Network (OPEN), Odense University Hospital, Odense, Denmark
  • 13 Department of Anesthesiology and Pain Medicine, The Ottawa Hospital, Ottawa, Canada; Clinical Epidemiology Program, Blueprint Translational Research Group, Ottawa Hospital Research Institute, Ottawa, Canada; Regenerative Medicine Program, Ottawa Hospital Research Institute, Ottawa, Canada
  • 14 Department of Ophthalmology, School of Medicine, University of Colorado Denver, Denver, Colorado, United States; Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland, USA
  • 15 Division of Headache, Department of Neurology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts, USA; Head of Research, The BMJ , London, UK
  • 16 Department of Epidemiology and Biostatistics, Indiana University School of Public Health-Bloomington, Bloomington, Indiana, USA
  • 17 Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, UK
  • 18 Centre for Reviews and Dissemination, University of York, York, UK
  • 19 EPPI-Centre, UCL Social Research Institute, University College London, London, UK
  • 20 Li Ka Shing Knowledge Institute of St. Michael's Hospital, Unity Health Toronto, Toronto, Canada; Epidemiology Division of the Dalla Lana School of Public Health and the Institute of Health Management, Policy, and Evaluation, University of Toronto, Toronto, Canada; Queen's Collaboration for Health Care Quality Joanna Briggs Institute Centre of Excellence, Queen's University, Kingston, Canada
  • 21 Methods Centre, Bruyère Research Institute, Ottawa, Ontario, Canada; School of Epidemiology and Public Health, Faculty of Medicine, University of Ottawa, Ottawa, Canada
  • 22 Centre for Journalology, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada; School of Epidemiology and Public Health, Faculty of Medicine, University of Ottawa, Ottawa, Canada
  • Correspondence to: M J Page matthew.page{at}monash.edu
  • Accepted 4 January 2021

The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement, published in 2009, was designed to help systematic reviewers transparently report why the review was done, what the authors did, and what they found. Over the past decade, advances in systematic review methodology and terminology have necessitated an update to the guideline. The PRISMA 2020 statement replaces the 2009 statement and includes new reporting guidance that reflects advances in methods to identify, select, appraise, and synthesise studies. The structure and presentation of the items have been modified to facilitate implementation. In this article, we present the PRISMA 2020 27-item checklist, an expanded checklist that details reporting recommendations for each item, the PRISMA 2020 abstract checklist, and the revised flow diagrams for original and updated reviews.

Systematic reviews serve many critical roles. They can provide syntheses of the state of knowledge in a field, from which future research priorities can be identified; they can address questions that otherwise could not be answered by individual studies; they can identify problems in primary research that should be rectified in future studies; and they can generate or evaluate theories about how or why phenomena occur. Systematic reviews therefore generate various types of knowledge for different users of reviews (such as patients, healthcare providers, researchers, and policy makers). 1 2 To ensure a systematic review is valuable to users, authors should prepare a transparent, complete, and accurate account of why the review was done, what they did (such as how studies were identified and selected) and what they found (such as characteristics of contributing studies and results of meta-analyses). Up-to-date reporting guidance facilitates authors achieving this. 3

The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement published in 2009 (hereafter referred to as PRISMA 2009) 4 5 6 7 8 9 10 is a reporting guideline designed to address poor reporting of systematic reviews. 11 The PRISMA 2009 statement comprised a checklist of 27 items recommended for reporting in systematic reviews and an “explanation and elaboration” paper 12 13 14 15 16 providing additional reporting guidance for each item, along with exemplars of reporting. The recommendations have been widely endorsed and adopted, as evidenced by its co-publication in multiple journals, citation in over 60 000 reports (Scopus, August 2020), endorsement from almost 200 journals and systematic review organisations, and adoption in various disciplines. Evidence from observational studies suggests that use of the PRISMA 2009 statement is associated with more complete reporting of systematic reviews, 17 18 19 20 although more could be done to improve adherence to the guideline. 21

Many innovations in the conduct of systematic reviews have occurred since publication of the PRISMA 2009 statement. For example, technological advances have enabled the use of natural language processing and machine learning to identify relevant evidence, 22 23 24 methods have been proposed to synthesise and present findings when meta-analysis is not possible or appropriate, 25 26 27 and new methods have been developed to assess the risk of bias in results of included studies. 28 29 Evidence on sources of bias in systematic reviews has accrued, culminating in the development of new tools to appraise the conduct of systematic reviews. 30 31 Terminology used to describe particular review processes has also evolved, as in the shift from assessing “quality” to assessing “certainty” in the body of evidence. 32 In addition, the publishing landscape has transformed, with multiple avenues now available for registering and disseminating systematic review protocols, 33 34 disseminating reports of systematic reviews, and sharing data and materials, such as preprint servers and publicly accessible repositories. Capturing these advances in the reporting of systematic reviews necessitated an update to the PRISMA 2009 statement.

Summary points

To ensure a systematic review is valuable to users, authors should prepare a transparent, complete, and accurate account of why the review was done, what they did, and what they found

The PRISMA 2020 statement provides updated reporting guidance for systematic reviews that reflects advances in methods to identify, select, appraise, and synthesise studies

The PRISMA 2020 statement consists of a 27-item checklist, an expanded checklist that details reporting recommendations for each item, the PRISMA 2020 abstract checklist, and revised flow diagrams for original and updated reviews

We anticipate that the PRISMA 2020 statement will benefit authors, editors, and peer reviewers of systematic reviews, and different users of reviews, including guideline developers, policy makers, healthcare providers, patients, and other stakeholders

Development of PRISMA 2020

A complete description of the methods used to develop PRISMA 2020 is available elsewhere. 35 We identified PRISMA 2009 items that were often reported incompletely by examining the results of studies investigating the transparency of reporting of published reviews. 17 21 36 37 We identified possible modifications to the PRISMA 2009 statement by reviewing 60 documents providing reporting guidance for systematic reviews (including reporting guidelines, handbooks, tools, and meta-research studies). 38 These reviews of the literature were used to inform the content of a survey with suggested possible modifications to the 27 items in PRISMA 2009 and possible additional items. Respondents were asked whether they believed we should keep each PRISMA 2009 item as is, modify it, or remove it, and whether we should add each additional item. Systematic review methodologists and journal editors were invited to complete the online survey (110 of 220 invited responded). We discussed proposed content and wording of the PRISMA 2020 statement, as informed by the review and survey results, at a 21-member, two-day, in-person meeting in September 2018 in Edinburgh, Scotland. Throughout 2019 and 2020, we circulated an initial draft and five revisions of the checklist and explanation and elaboration paper to co-authors for feedback. In April 2020, we invited 22 systematic reviewers who had expressed interest in providing feedback on the PRISMA 2020 checklist to share their views (via an online survey) on the layout and terminology used in a preliminary version of the checklist. Feedback was received from 15 individuals and considered by the first author, and any revisions deemed necessary were incorporated before the final version was approved and endorsed by all co-authors.

The PRISMA 2020 statement

Scope of the guideline

The PRISMA 2020 statement has been designed primarily for systematic reviews of studies that evaluate the effects of health interventions, irrespective of the design of the included studies. However, the checklist items are applicable to reports of systematic reviews evaluating other interventions (such as social or educational interventions), and many items are applicable to systematic reviews with objectives other than evaluating interventions (such as evaluating aetiology, prevalence, or prognosis). PRISMA 2020 is intended for use in systematic reviews that include synthesis (such as pairwise meta-analysis or other statistical synthesis methods) or do not include synthesis (for example, because only one eligible study is identified). The PRISMA 2020 items are relevant for mixed-methods systematic reviews (which include quantitative and qualitative studies), but reporting guidelines addressing the presentation and synthesis of qualitative data should also be consulted. 39 40 PRISMA 2020 can be used for original systematic reviews, updated systematic reviews, or continually updated (“living”) systematic reviews. However, for updated and living systematic reviews, there may be some additional considerations that need to be addressed. Where there is relevant content from other reporting guidelines, we reference these guidelines within the items in the explanation and elaboration paper 41 (such as PRISMA-Search 42 in items 6 and 7, Synthesis without meta-analysis (SWiM) reporting guideline 27 in item 13d). Box 1 includes a glossary of terms used throughout the PRISMA 2020 statement.

Glossary of terms

Systematic review —A review that uses explicit, systematic methods to collate and synthesise findings of studies that address a clearly formulated question 43

Statistical synthesis —The combination of quantitative results of two or more studies. This encompasses meta-analysis of effect estimates (described below) and other methods, such as combining P values, calculating the range and distribution of observed effects, and vote counting based on the direction of effect (see McKenzie and Brennan 25 for a description of each method)

Meta-analysis of effect estimates —A statistical technique used to synthesise results when study effect estimates and their variances are available, yielding a quantitative summary of results 25

Outcome —An event or measurement collected for participants in a study (such as quality of life, mortality)

Result —The combination of a point estimate (such as a mean difference, risk ratio, or proportion) and a measure of its precision (such as a confidence/credible interval) for a particular outcome

Report —A document (paper or electronic) supplying information about a particular study. It could be a journal article, preprint, conference abstract, study register entry, clinical study report, dissertation, unpublished manuscript, government report, or any other document providing relevant information

Record —The title or abstract (or both) of a report indexed in a database or website (such as a title or abstract for an article indexed in Medline). Records that refer to the same report (such as the same journal article) are “duplicates”; however, records that refer to reports that are merely similar (such as a similar abstract submitted to two different conferences) should be considered unique.

Study —An investigation, such as a clinical trial, that includes a defined group of participants and one or more interventions and outcomes. A “study” might have multiple reports. For example, reports could include the protocol, statistical analysis plan, baseline characteristics, results for the primary outcome, results for harms, results for secondary outcomes, and results for additional mediator and moderator analyses
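
As a small illustration of one statistical synthesis method named in the glossary, the sketch below combines independent P values with Fisher's method. The P values are made up, and this is a toy example under the assumption of independent studies, not guidance from the PRISMA statement itself.

```python
# Minimal sketch of Fisher's method for combining independent P values,
# one of the "statistical synthesis" options named above. The P values
# are illustrative.
import math
from scipy.stats import chi2

def fishers_method(p_values: list[float]) -> float:
    """Combined P value: -2*sum(ln p_i) ~ chi-square with 2k df under H0."""
    statistic = -2.0 * sum(math.log(p) for p in p_values)
    return chi2.sf(statistic, df=2 * len(p_values))

print(fishers_method([0.04, 0.10, 0.30]))
```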

PRISMA 2020 is not intended to guide systematic review conduct, for which comprehensive resources are available. 43 44 45 46 However, familiarity with PRISMA 2020 is useful when planning and conducting systematic reviews to ensure that all recommended information is captured. PRISMA 2020 should not be used to assess the conduct or methodological quality of systematic reviews; other tools exist for this purpose. 30 31 Furthermore, PRISMA 2020 is not intended to inform the reporting of systematic review protocols, for which a separate statement is available (PRISMA for Protocols (PRISMA-P) 2015 statement 47 48 ). Finally, extensions to the PRISMA 2009 statement have been developed to guide reporting of network meta-analyses, 49 meta-analyses of individual participant data, 50 systematic reviews of harms, 51 systematic reviews of diagnostic test accuracy studies, 52 and scoping reviews 53 ; for these types of reviews we recommend authors report their review in accordance with the recommendations in PRISMA 2020 along with the guidance specific to the extension.

How to use PRISMA 2020

The PRISMA 2020 statement (including the checklists, explanation and elaboration, and flow diagram) replaces the PRISMA 2009 statement, which should no longer be used. Box 2 summarises noteworthy changes from the PRISMA 2009 statement. The PRISMA 2020 checklist includes seven sections with 27 items, some of which include sub-items (table 1). A checklist for journal and conference abstracts for systematic reviews is included in PRISMA 2020. This abstract checklist is an update of the 2013 PRISMA for Abstracts statement, 54 reflecting new and modified content in PRISMA 2020 (table 2). A template PRISMA flow diagram is provided, which can be modified depending on whether the systematic review is original or updated (fig 1).

Noteworthy changes to the PRISMA 2009 statement

Inclusion of the abstract reporting checklist within PRISMA 2020 (see item #2 and table 2).

Movement of the ‘Protocol and registration’ item from the start of the Methods section of the checklist to a new Other section, with addition of a sub-item recommending authors describe amendments to information provided at registration or in the protocol (see item #24a-24c).

Modification of the ‘Search’ item to recommend authors present full search strategies for all databases, registers and websites searched, not just at least one database (see item #7).

Modification of the ‘Study selection’ item in the Methods section to emphasise the reporting of how many reviewers screened each record and each report retrieved, whether they worked independently, and if applicable, details of automation tools used in the process (see item #8).

Addition of a sub-item to the ‘Data items’ item recommending authors report how outcomes were defined, which results were sought, and methods for selecting a subset of results from included studies (see item #10a).

Splitting of the ‘Synthesis of results’ item in the Methods section into six sub-items recommending authors describe: the processes used to decide which studies were eligible for each synthesis; any methods required to prepare the data for synthesis; any methods used to tabulate or visually display results of individual studies and syntheses; any methods used to synthesise results; any methods used to explore possible causes of heterogeneity among study results (such as subgroup analysis, meta-regression); and any sensitivity analyses used to assess robustness of the synthesised results (see item #13a-13f).

Addition of a sub-item to the ‘Study selection’ item in the Results section recommending authors cite studies that might appear to meet the inclusion criteria, but which were excluded, and explain why they were excluded (see item #16b).

Splitting of the ‘Synthesis of results’ item in the Results section into four sub-items recommending authors: briefly summarise the characteristics and risk of bias among studies contributing to the synthesis; present results of all statistical syntheses conducted; present results of any investigations of possible causes of heterogeneity among study results; and present results of any sensitivity analyses (see item #20a-20d).

Addition of new items recommending authors report methods for and results of an assessment of certainty (or confidence) in the body of evidence for an outcome (see items #15 and #22).

Addition of a new item recommending authors declare any competing interests (see item #26).

Addition of a new item recommending authors indicate whether data, analytic code and other materials used in the review are publicly available and if so, where they can be found (see item #27).

PRISMA 2020 item checklist

PRISMA 2020 for Abstracts checklist*

Fig 1

PRISMA 2020 flow diagram template for systematic reviews. The new design is adapted from flow diagrams proposed by Boers, 55 Mayo-Wilson et al. 56 and Stovold et al. 57 The boxes in grey should only be completed if applicable; otherwise they should be removed from the flow diagram. Note that a “report” could be a journal article, preprint, conference abstract, study register entry, clinical study report, dissertation, unpublished manuscript, government report or any other document providing relevant information.

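As a minimal illustration of the bookkeeping behind a flow diagram, the sketch below tallies hypothetical counts for each stage of an original review and checks that they are internally consistent. The stage names and numbers are illustrative, not part of the PRISMA template.

```python
# Minimal sketch: tallying the counts that feed a PRISMA-style flow
# diagram for an original review. All numbers are hypothetical.
counts = {
    "records identified (databases + registers)": 1480,
    "duplicates removed": 230,
    "records screened (titles/abstracts)": 1250,
    "records excluded at screening": 1100,
    "reports sought for retrieval": 150,
    "reports not retrieved": 5,
    "reports assessed for eligibility": 145,
    "reports excluded (with reasons)": 120,
    "studies included in review": 25,
}

# Simple consistency checks before drawing the diagram
assert counts["records identified (databases + registers)"] \
       - counts["duplicates removed"] == counts["records screened (titles/abstracts)"]
assert counts["records screened (titles/abstracts)"] \
       - counts["records excluded at screening"] == counts["reports sought for retrieval"]

for stage, n in counts.items():
    print(f"{stage}: {n}")
```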

We recommend authors refer to PRISMA 2020 early in the writing process, because prospective consideration of the items may help to ensure that all the items are addressed. To help keep track of which items have been reported, the PRISMA statement website (http://www.prisma-statement.org/) includes fillable templates of the checklists to download and complete (also available in the data supplement on bmj.com). We have also created a web application that allows users to complete the checklist via a user-friendly interface 58 (available at https://prisma.shinyapps.io/checklist/ and adapted from the Transparency Checklist app 59). The completed checklist can be exported to Word or PDF. Editable templates of the flow diagram can also be downloaded from the PRISMA statement website.

We have prepared an updated explanation and elaboration paper, in which we explain why reporting of each item is recommended and present bullet points that detail the reporting recommendations (which we refer to as elements). 41 The bullet-point structure is new to PRISMA 2020 and has been adopted to facilitate implementation of the guidance. 60 61 An expanded checklist, which comprises an abridged version of the elements presented in the explanation and elaboration paper, with references and some examples removed, is available in the data supplement on bmj.com. Consulting the explanation and elaboration paper is recommended if further clarity or information is required.

Journals and publishers might impose word and section limits, and limits on the number of tables and figures allowed in the main report. In such cases, if the relevant information for some items already appears in a publicly accessible review protocol, referring to the protocol may suffice. Alternatively, placing detailed descriptions of the methods used or additional results (such as for less critical outcomes) in supplementary files is recommended. Ideally, supplementary files should be deposited to a general-purpose or institutional open-access repository that provides free and permanent access to the material (such as Open Science Framework, Dryad, figshare). A reference or link to the additional information should be included in the main report. Finally, although PRISMA 2020 provides a template for where information might be located, the suggested location should not be seen as prescriptive; the guiding principle is to ensure the information is reported.

Use of PRISMA 2020 has the potential to benefit many stakeholders. Complete reporting allows readers to assess the appropriateness of the methods, and therefore the trustworthiness of the findings. Presenting and summarising characteristics of studies contributing to a synthesis allows healthcare providers and policy makers to evaluate the applicability of the findings to their setting. Describing the certainty in the body of evidence for an outcome and the implications of findings should help policy makers, managers, and other decision makers formulate appropriate recommendations for practice or policy. Complete reporting of all PRISMA 2020 items also facilitates replication and review updates, as well as inclusion of systematic reviews in overviews (of systematic reviews) and guidelines, so teams can leverage work that is already done and decrease research waste. 36 62 63

We updated the PRISMA 2009 statement by adapting the EQUATOR Network’s guidance for developing health research reporting guidelines. 64 We evaluated the reporting completeness of published systematic reviews, 17 21 36 37 reviewed the items included in other documents providing guidance for systematic reviews, 38 surveyed systematic review methodologists and journal editors for their views on how to revise the original PRISMA statement, 35 discussed the findings at an in-person meeting, and prepared this document through an iterative process. Our recommendations are informed by the reviews and survey conducted before the in-person meeting, theoretical considerations about which items facilitate replication and help users assess the risk of bias and applicability of systematic reviews, and co-authors’ experience with authoring and using systematic reviews.

Various strategies to increase the use of reporting guidelines and improve reporting have been proposed. They include educators introducing reporting guidelines into graduate curricula to promote good reporting habits of early career scientists 65 ; journal editors and regulators endorsing use of reporting guidelines 18 ; peer reviewers evaluating adherence to reporting guidelines 61 66 ; journals requiring authors to indicate where in their manuscript they have adhered to each reporting item 67 ; and authors using online writing tools that prompt complete reporting at the writing stage. 60 Multi-pronged interventions, where more than one of these strategies are combined, may be more effective (such as completion of checklists coupled with editorial checks). 68 However, of 31 interventions proposed to increase adherence to reporting guidelines, the effects of only 11 have been evaluated, mostly in observational studies at high risk of bias due to confounding. 69 It is therefore unclear which strategies should be used. Future research might explore barriers and facilitators to the use of PRISMA 2020 by authors, editors, and peer reviewers, designing interventions that address the identified barriers, and evaluating those interventions using randomised trials. To inform possible revisions to the guideline, it would also be valuable to conduct think-aloud studies 70 to understand how systematic reviewers interpret the items, and reliability studies to identify items where there is varied interpretation of the items.

We encourage readers to submit evidence that informs any of the recommendations in PRISMA 2020 (via the PRISMA statement website: http://www.prisma-statement.org/ ). To enhance accessibility of PRISMA 2020, several translations of the guideline are under way (see available translations at the PRISMA statement website). We encourage journal editors and publishers to raise awareness of PRISMA 2020 (for example, by referring to it in journal “Instructions to authors”), endorsing its use, advising editors and peer reviewers to evaluate submitted systematic reviews against the PRISMA 2020 checklists, and making changes to journal policies to accommodate the new reporting recommendations. We recommend existing PRISMA extensions 47 49 50 51 52 53 71 72 be updated to reflect PRISMA 2020 and advise developers of new PRISMA extensions to use PRISMA 2020 as the foundation document.

We anticipate that the PRISMA 2020 statement will benefit authors, editors, and peer reviewers of systematic reviews, and different users of reviews, including guideline developers, policy makers, healthcare providers, patients, and other stakeholders. Ultimately, we hope that uptake of the guideline will lead to more transparent, complete, and accurate reporting of systematic reviews, thus facilitating evidence based decision making.

Acknowledgments

We dedicate this paper to the late Douglas G Altman and Alessandro Liberati, whose contributions were fundamental to the development and implementation of the original PRISMA statement.

We thank the following contributors who completed the survey to inform discussions at the development meeting: Xavier Armoiry, Edoardo Aromataris, Ana Patricia Ayala, Ethan M Balk, Virginia Barbour, Elaine Beller, Jesse A Berlin, Lisa Bero, Zhao-Xiang Bian, Jean Joel Bigna, Ferrán Catalá-López, Anna Chaimani, Mike Clarke, Tammy Clifford, Ioana A Cristea, Miranda Cumpston, Sofia Dias, Corinna Dressler, Ivan D Florez, Joel J Gagnier, Chantelle Garritty, Long Ge, Davina Ghersi, Sean Grant, Gordon Guyatt, Neal R Haddaway, Julian PT Higgins, Sally Hopewell, Brian Hutton, Jamie J Kirkham, Jos Kleijnen, Julia Koricheva, Joey SW Kwong, Toby J Lasserson, Julia H Littell, Yoon K Loke, Malcolm R Macleod, Chris G Maher, Ana Marušic, Dimitris Mavridis, Jessie McGowan, Matthew DF McInnes, Philippa Middleton, Karel G Moons, Zachary Munn, Jane Noyes, Barbara Nußbaumer-Streit, Donald L Patrick, Tatiana Pereira-Cenci, Ba’ Pham, Bob Phillips, Dawid Pieper, Michelle Pollock, Daniel S Quintana, Drummond Rennie, Melissa L Rethlefsen, Hannah R Rothstein, Maroeska M Rovers, Rebecca Ryan, Georgia Salanti, Ian J Saldanha, Margaret Sampson, Nancy Santesso, Rafael Sarkis-Onofre, Jelena Savović, Christopher H Schmid, Kenneth F Schulz, Guido Schwarzer, Beverley J Shea, Paul G Shekelle, Farhad Shokraneh, Mark Simmonds, Nicole Skoetz, Sharon E Straus, Anneliese Synnot, Emily E Tanner-Smith, Brett D Thombs, Hilary Thomson, Alexander Tsertsvadze, Peter Tugwell, Tari Turner, Lesley Uttley, Jeffrey C Valentine, Matt Vassar, Areti Angeliki Veroniki, Meera Viswanathan, Cole Wayant, Paul Whaley, and Kehu Yang. We thank the following contributors who provided feedback on a preliminary version of the PRISMA 2020 checklist: Jo Abbott, Fionn Büttner, Patricia Correia-Santos, Victoria Freeman, Emily A Hennessy, Rakibul Islam, Amalia (Emily) Karahalios, Kasper Krommes, Andreas Lundh, Dafne Port Nascimento, Davina Robson, Catherine Schenck-Yglesias, Mary M Scott, Sarah Tanveer and Pavel Zhelnov. We thank Abigail H Goben, Melissa L Rethlefsen, Tanja Rombey, Anna Scott, and Farhad Shokraneh for their helpful comments on the preprints of the PRISMA 2020 papers. We thank Edoardo Aromataris, Stephanie Chang, Toby Lasserson and David Schriger for their helpful peer review comments on the PRISMA 2020 papers.

Contributors: JEM and DM are joint senior authors. MJP, JEM, PMB, IB, TCH, CDM, LS, and DM conceived this paper and designed the literature review and survey conducted to inform the guideline content. MJP conducted the literature review, administered the survey and analysed the data for both. MJP prepared all materials for the development meeting. MJP and JEM presented proposals at the development meeting. All authors except for TCH, JMT, EAA, SEB, and LAM attended the development meeting. MJP and JEM took and consolidated notes from the development meeting. MJP and JEM led the drafting and editing of the article. JEM, PMB, IB, TCH, LS, JMT, EAA, SEB, RC, JG, AH, TL, EMW, SM, LAM, LAS, JT, ACT, PW, and DM drafted particular sections of the article. All authors were involved in revising the article critically for important intellectual content. All authors approved the final version of the article. MJP is the guarantor of this work. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted.

Funding: There was no direct funding for this research. MJP is supported by an Australian Research Council Discovery Early Career Researcher Award (DE200101618) and was previously supported by an Australian National Health and Medical Research Council (NHMRC) Early Career Fellowship (1088535) during the conduct of this research. JEM is supported by an Australian NHMRC Career Development Fellowship (1143429). TCH is supported by an Australian NHMRC Senior Research Fellowship (1154607). JMT is supported by Evidence Partners Inc. JMG is supported by a Tier 1 Canada Research Chair in Health Knowledge Transfer and Uptake. MML is supported by The Ottawa Hospital Anaesthesia Alternate Funds Association and a Faculty of Medicine Junior Research Chair. TL is supported by funding from the National Eye Institute (UG1EY020522), National Institutes of Health, United States. LAM is supported by a National Institute for Health Research Doctoral Research Fellowship (DRF-2018-11-ST2-048). ACT is supported by a Tier 2 Canada Research Chair in Knowledge Synthesis. DM is supported in part by a University Research Chair, University of Ottawa. The funders had no role in considering the study design or in the collection, analysis, interpretation of data, writing of the report, or decision to submit the article for publication.

Competing interests: All authors have completed the ICMJE uniform disclosure form at http://www.icmje.org/conflicts-of-interest/ and declare: EL is head of research for the BMJ; MJP is an editorial board member for PLOS Medicine; ACT is an associate editor and MJP, TL, EMW, and DM are editorial board members for the Journal of Clinical Epidemiology; DM and LAS were editors in chief, LS, JMT, and ACT are associate editors, and JG is an editorial board member for Systematic Reviews. None of these authors were involved in the peer review process or decision to publish. TCH has received personal fees from Elsevier outside the submitted work. EMW has received personal fees from the American Journal for Public Health, for which he is the editor for systematic reviews. VW is editor in chief of the Campbell Collaboration, which produces systematic reviews, and co-convenor of the Campbell and Cochrane equity methods group. DM is chair of the EQUATOR Network, IB is adjunct director of the French EQUATOR Centre, and TCH is co-director of the Australasian EQUATOR Centre, which advocates for the use of reporting guidelines to improve the quality of reporting in research articles. JMT received salary from Evidence Partners, creator of DistillerSR software for systematic reviews; Evidence Partners was not involved in the design or outcomes of the statement, and the views expressed solely represent those of the author.

Provenance and peer review: Not commissioned; externally peer reviewed.

Patient and public involvement: Patients and the public were not involved in this methodological research. We plan to disseminate the research widely, including to community participants in evidence synthesis organisations.

This is an Open Access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) license, which permits others to distribute, remix, adapt and build upon this work, for commercial use, provided the original work is properly cited. See: http://creativecommons.org/licenses/by/4.0/ .



Systematic reviews vs meta-analysis: what’s the difference?

Posted on 24th July 2023 by Verónica Tanco Tellechea

""

You may hear the terms ‘systematic review’ and ‘meta-analysis’ used interchangeably. Although they are related, they are distinctly different. Learn more in this blog for beginners.

What is a systematic review?

According to Cochrane (1), a systematic review attempts to identify, appraise, and synthesize all the empirical evidence to answer a specific research question. A systematic review is therefore where you might find the most relevant, adequate, and current information on a specific topic. In the levels of evidence pyramid, systematic reviews are surpassed only by meta-analyses.

To conduct a systematic review, you will need, among other things: 

  • A specific research question, usually in the form of a PICO question.
  • Pre-specified eligibility criteria, to decide which articles will be included or discarded from the review. 
  • To follow a systematic method that will minimize bias.

You can find protocols to guide you from both Cochrane and the Equator Network, among other places, and if you are new to the topic, have a read of an overview about systematic reviews.

What is a meta-analysis?

A meta-analysis is a quantitative, epidemiological study design used to systematically assess the results of previous research (2). Meta-analyses are usually, though not always, based on randomized controlled trials. In essence, a meta-analysis is a statistical tool that allows researchers to combine outcomes from multiple studies mathematically.

When can a meta-analysis be implemented?

A meta-analysis can, in principle, always be attempted, but it yields the most reliable results when the studies included in the systematic review are of good quality, have similar designs, and use similar outcome measures.

Why are meta-analyses important?

Outcomes from a meta-analysis may provide more precise information on the estimate of the effect being studied, because it merges outcomes from multiple studies. In a meta-analysis, data from various trials are combined to generate an average result (1), which is portrayed in a forest plot diagram. A meta-analysis may also include a funnel plot diagram to visually detect publication bias.

Conclusions

A systematic review is an article that synthesizes available evidence on a certain topic using a specific research question, pre-specified eligibility criteria for including articles, and a systematic method for its production. A meta-analysis, in contrast, is a quantitative, epidemiological study design used to assess the results of the articles included in a systematic review.

Remember: All meta-analyses involve a systematic review, but not all systematic reviews involve a meta-analysis.

If you would like some further reading on this topic, we suggest the following:

The systematic review – a S4BE blog article

Meta-analysis: what, why, and how – a S4BE blog article

The difference between a systematic review and a meta-analysis – a blog article via Covidence

Systematic review vs meta-analysis: what’s the difference? A 5-minute video from Research Masterminds.

  • About Cochrane reviews [Internet]. Cochranelibrary.com. [cited 2023 Apr 30]. Available from: https://www.cochranelibrary.com/about/about-cochrane-reviews
  • Haidich AB. Meta-analysis in medical research. Hippokratia. 2010;14(Suppl 1):29–37.




Study Design 101: Meta-Analysis


Meta-Analysis


A subset of systematic reviews; a method for systematically combining pertinent qualitative and quantitative study data from several selected studies to develop a single conclusion that has greater statistical power. This conclusion is statistically stronger than the analysis of any single study, due to increased numbers of subjects, greater diversity among subjects, or accumulated effects and results.

Meta-analysis would be used for the following purposes:

  • To establish statistical significance with studies that have conflicting results
  • To develop a more accurate estimate of effect magnitude
  • To provide a more complex analysis of harms, safety data, and benefits
  • To examine subgroups with individual numbers that are not statistically significant

When the individual studies are randomized controlled trials (RCTs), a meta-analysis combining their results sits at the highest level of the evidence hierarchy, followed by systematic reviews, which analyze all available studies on a topic.

Advantages

  • Greater statistical power
  • Confirmatory data analysis
  • Greater ability to extrapolate to general population affected
  • Considered an evidence-based resource

Disadvantages

  • Difficult and time consuming to identify appropriate studies
  • Not all studies provide adequate data for inclusion and analysis
  • Requires advanced statistical techniques
  • Heterogeneity of study populations

Design pitfalls to look out for

The studies pooled for review should be similar in type (e.g., all randomized controlled trials).

Are the studies being reviewed all the same type of study or are they a mixture of different types?

The analysis should include published and unpublished results to avoid publication bias.

Does the meta-analysis include any appropriate relevant studies that may have had negative outcomes?

Fictitious Example

Do individuals who wear sunscreen have fewer cases of melanoma than those who do not? A MEDLINE search was conducted using the terms melanoma, sunscreening agents, and zinc oxide, resulting in 8 randomized controlled studies, each with between 100 and 120 subjects. All of the studies showed a protective effect of sunscreen use against melanoma. The subjects from all eight studies (860 in total) were pooled and statistically analyzed to determine the relationship between sunscreen use and melanoma. This meta-analysis showed a 50% reduction in melanoma diagnoses among sunscreen wearers.

Real-life Examples

Goyal, A., Elminawy, M., Kerezoudis, P., Lu, V., Yolcu, Y., Alvi, M., & Bydon, M. (2019). Impact of obesity on outcomes following lumbar spine surgery: A systematic review and meta-analysis. Clinical Neurology and Neurosurgery, 177 , 27-36. https://doi.org/10.1016/j.clineuro.2018.12.012

This meta-analysis was interested in determining whether obesity affects the outcome of spinal surgery. Some previous studies have shown higher perioperative morbidity in patients with obesity, while other studies have not shown this effect. This study looked at surgical outcomes including "blood loss, operative time, length of stay, complication and reoperation rates and functional outcomes" between patients with and without obesity. A meta-analysis of 32 studies (23,415 patients) was conducted. There were no significant differences for patients undergoing minimally invasive surgery, but patients with obesity who had open surgery experienced higher blood loss and longer operative times (though not clinically meaningful differences), as well as higher complication and reoperation rates. Further research is needed to explore this issue in patients with morbid obesity.

Nakamura, A., van Der Waerden, J., Melchior, M., Bolze, C., El-Khoury, F., & Pryor, L. (2019). Physical activity during pregnancy and postpartum depression: Systematic review and meta-analysis. Journal of Affective Disorders, 246 , 29-41. https://doi.org/10.1016/j.jad.2018.12.009

This meta-analysis explored whether physical activity during pregnancy prevents postpartum depression. Seventeen studies were included (93,676 women) and analysis showed a "significant reduction in postpartum depression scores in women who were physically active during their pregnancies when compared with inactive women." Possible limitations or moderators of this effect include intensity and frequency of physical activity, type of physical activity, and timepoint in pregnancy (e.g. trimester).

Related Terms

Systematic Review

A document often written by a panel that provides a comprehensive review of all relevant studies on a particular clinical or health-related topic/question.

Publication Bias

A phenomenon in which studies with positive results have a better chance of being published, are published earlier, and are published in journals with higher impact factors. Therefore, conclusions based exclusively on published studies can be misleading.

Now test yourself!

1. A Meta-Analysis pools together the sample populations from different studies, such as Randomized Controlled Trials, into one statistical analysis and treats them as one large sample population with one conclusion.

a) True b) False

2. One potential design pitfall of Meta-Analyses that is important to pay attention to is:

a) Whether it is evidence-based.
b) If the authors combined studies with conflicting results.
c) If the authors appropriately combined studies so they did not compare apples and oranges.
d) If the authors used only quantitative data.




Systematic Review and Meta-Analysis

  • Yousif Eliya, Alexander Zakharia, Aaron Gazendam and Darren de SA
  • Chapter in Orthopaedic Sports Medicine (Springer, Cham, 2024). https://doi.org/10.1007/978-3-030-65430-6_80-1

Within the paradigm of evidence-based orthopedics, systematic reviews and meta-analyses top the hierarchy of evidence. A systematic review summarizes the available literature on a specific research question, and a meta-analysis applies statistical methods to combine results from two or more studies. Systematic reviews are increasingly published in orthopedic surgery, many answering the same clinical questions with different conclusions. A well-performed systematic review includes a clinical question that is comprehensively searched over multiple databases by at least two reviewers. Findings from systematic reviews should include outcomes most meaningful to patients and discuss results based on clinical and statistical significance. This chapter highlights characteristics of a well-conducted systematic review and meta-analysis and offers nine tips on the best methods to design, synthesize, and appraise this research methodology.

  • Systematic Reviews
  • Meta-analysis
  • Research Methodology
  • Evidence-Based Orthopedics
  • Evidence-Based Medicine



  • Open access
  • Published: 01 August 2019

A step by step guide for conducting a systematic review and meta-analysis with simulation data

  • Gehad Mohamed Tawfik, Kadek Agus Surya Dila, Muawia Yousif Fadlelmola Mohamed, Dao Ngoc Hien Tam, Nguyen Dang Kien, Ali Mahmoud Ahmed & Nguyen Tien Huy

Tropical Medicine and Health, volume 47, article number 46 (2019)


The number of studies relating to tropical medicine and health has increased strikingly over the last few decades. In the field of tropical medicine and health, a well-conducted systematic review and meta-analysis (SR/MA) is considered a feasible solution for keeping clinicians abreast of current evidence-based medicine. Understanding the steps of an SR/MA is of paramount importance for its conduct, yet it is not easy, and researchers face many obstacles along the way. To address these hindrances, this methodology study provides a step-by-step approach, aimed mainly at beginners and junior researchers in tropical medicine and other health care fields, on how to properly conduct an SR/MA; the steps described combine our experience and expertise with well-known, accepted international guidance.

We suggest that all steps of an SR/MA be carried out independently by 2–3 reviewers, with disagreements resolved by discussion, to ensure data quality and accuracy.

SR/MA steps include development of the research question, forming eligibility criteria, building a search strategy, searching databases, protocol registration, title and abstract screening, full-text screening, manual searching, data extraction, quality assessment, data checking, statistical analysis, double data checking, and manuscript writing.

Introduction

The number of studies published in the biomedical literature, especially in tropical medicine and health, has increased strikingly over the last few decades. This massive abundance of literature makes clinical medicine increasingly complex, and knowledge from multiple studies is often needed to inform a particular clinical decision. However, available studies are often heterogeneous in their design, operational quality, and subjects under study, and may handle the research question in different ways, which adds to the complexity of synthesizing evidence and conclusions [1].

Systematic reviews and meta-analyses (SR/MAs) carry a high level of evidence, as represented by the evidence-based pyramid. Therefore, a well-conducted SR/MA is considered a feasible solution for keeping health clinicians abreast of contemporary evidence-based medicine.

Unlike a systematic review, an unsystematic narrative review tends to be descriptive: authors often select articles based on their own point of view, which makes such reviews prone to poor quality. A systematic review, by contrast, uses a systematic method to summarize evidence on a question with a detailed and comprehensive plan of study. Despite the growing number of guidelines for conducting a systematic review effectively, the basic steps are to frame the question; identify relevant work, which consists of developing criteria and searching for articles; appraise the quality of the included studies; summarize the evidence; and interpret the results [2, 3]. However, these seemingly simple steps are not easy to achieve in practice: researchers can struggle with many problems for which no detailed guidance exists.

Conducting an SR/MA in tropical medicine and health may be difficult, especially for young researchers; therefore, understanding its essential steps is crucial. To remove these obstacles, we recommend a flow diagram (Fig. 1) which illustrates, in detail and step by step, the stages of SR/MA studies. This methodology study aims to provide a step-by-step approach, mainly for beginners and junior researchers in tropical medicine and other health care fields, on how to properly and succinctly conduct an SR/MA; the steps described combine our experience and expertise with well-known, accepted international guidance.

Fig. 1. Detailed flow diagram guideline for systematic review and meta-analysis steps. Note: the star icon refers to “2–3 reviewers screen independently”

Methods and results

Detailed steps for conducting any systematic review and meta-analysis

We searched the methods reported in published SR/MAs in tropical medicine and other healthcare fields, alongside published guidance such as the Cochrane Handbook [4], to collect the lowest-bias method for each step of SR/MA conduct. Furthermore, we drew on the guidelines that we apply in our own SR/MA studies. We combined these methods into a detailed flow diagram that shows how each SR/MA step is conducted.

Any SR/MA must follow the widely accepted Preferred Reporting Items for Systematic Review and Meta-analysis statement (PRISMA checklist 2009) (Additional file 5 : Table S1) [ 5 ].

We illustrate our methods with an explanatory simulation example on the topic of “evaluating the safety of Ebola vaccine,” Ebola being a rare but fatal tropical disease. All the methods explained follow international standards, supplemented by our compiled experience in conducting SRs. This is an SR under conduct by a team of researchers in our research group; the Ebola outbreak that took place in Africa (2013–2016) resulted in significant mortality and morbidity, and since there are many published and ongoing trials assessing the safety of Ebola vaccines, we thought this would provide a good opportunity to tackle this hotly debated issue. Moreover, a new fatal outbreak has flared in the Democratic Republic of the Congo since August 2018, infecting more than 1,000 people according to the World Health Organization and killing 629 so far. It is considered the second-worst Ebola outbreak, after the 2014 outbreak in West Africa, which infected more than 26,000 people and killed about 11,300 over its course.

Research question and objectives

Like other study designs, the research question of an SR/MA should be feasible, interesting, novel, ethical, and relevant. Therefore, a clear, logical, and well-defined research question should be formulated, usually with one of two common tools: PICO or SPIDER. PICO (Population, Intervention, Comparison, Outcome) is used mostly in quantitative evidence synthesis, and authors have demonstrated that PICO is more sensitive than the more specific SPIDER approach [6]. SPIDER (Sample, Phenomenon of Interest, Design, Evaluation, Research type) was proposed as a method for qualitative and mixed-methods searches.

We recommend a combined approach, using either or both of the SPIDER and PICO tools to build a comprehensive search, depending on time and resource limitations. Were our assumed research topic of a qualitative nature, the SPIDER approach would be the more valid choice.

PICO is usually used for systematic reviews and meta-analyses of clinical trials. For observational studies (without an intervention or comparator), as in many tropical and epidemiological questions, it is often enough to use P (Patient) and O (Outcome) only to formulate a research question. We must clearly indicate the population (P), then the intervention (I) or exposure. Next, we compare (C) the indicated intervention with other interventions or a placebo. Finally, we clarify our relevant outcomes (O).

To facilitate comprehension, we choose Ebola virus disease (EVD) as an example. The vaccine for EVD is currently under development in phase I, II, and III clinical trials; we want to know whether this vaccine is safe and can induce sufficient immunogenicity in subjects.

An example of an SR/MA research question based on PICO for this issue is: What are the safety and immunogenicity of the Ebola vaccine in humans? (P: healthy subjects (humans); I: vaccination; C: placebo; O: safety or adverse effects)

Preliminary research and idea validation

We recommend a preliminary search to identify relevant articles, ensure the validity of the proposed idea, avoid duplicating previously addressed questions, and confirm that there are enough articles for analysis. Moreover, themes should focus on relevant and important health care issues, consider global needs and values, reflect the current science, and be consistent with the adopted review methods. Becoming deeply familiar with the study field through relevant videos and discussions is of paramount importance for better retrieval of results. If we skip this step, our study could be cancelled whenever we discover a similar study already published, meaning we have wasted our time on a problem that has already been tackled.

To do this, we can start with a simple search in PubMed or Google Scholar with the search terms Ebola AND vaccine. In doing so, we identify a systematic review and meta-analysis of determinant factors influencing antibody response from vaccination with Ebola vaccine in non-human primates and humans [7], a relevant paper to read for deeper insight and to identify gaps for better formulation of our research question or purpose. We can still conduct a systematic review and meta-analysis of the Ebola vaccine, because we evaluate safety as a different outcome and in a different population (humans only).

Inclusion and exclusion criteria

Eligibility criteria are based on the PICO approach, study design, and date. Exclusion criteria mostly cover unrelated articles, duplicates, unavailable full texts, and abstract-only papers. These exclusions should be stated in advance to protect the review from bias. The inclusion criteria are articles with the target patients, the investigated interventions, or the comparison between two studied interventions; briefly, articles containing information that answers our research question. Most importantly, the information should be clear and sufficient, whether positive or negative, to answer the question.

For the topic we have chosen, the inclusion criteria can be: (1) any clinical trial evaluating the safety of the Ebola vaccine and (2) no restriction on country, patient age, race, gender, publication language, or date. Exclusion criteria are: (1) studies of the Ebola vaccine in non-human subjects or in vitro; (2) studies whose data cannot be reliably extracted, or with duplicate or overlapping data; (3) abstract-only papers, such as preceding papers, conference abstracts, editorials, author responses, theses, and books; (4) articles without an available full text; and (5) case reports, case series, and systematic reviews. The PRISMA flow diagram template used in SR/MA studies can be found in Fig. 2.

Fig. 2. PRISMA flow diagram of studies’ screening and selection

Search strategy

A standard search strategy is built in PubMed and later modified to suit each specific database to get the most relevant results. The basic search strategy is built from the research question formulation (i.e., PICO or PICOS). Search strategies are constructed to include free-text terms (e.g., in the title and abstract) and any appropriate subject indexing (e.g., MeSH) expected to retrieve eligible studies, with the help of an expert in the review topic or an information specialist. Additionally, we advise not to include terms for the outcomes, as doing so might prevent the database from retrieving eligible studies: outcomes are often not mentioned explicitly in titles and abstracts.

The search term is improved by doing a trial search and looking for other relevant terms within each concept in the retrieved papers. To search for clinical trials, we can use these descriptors in PubMed: “clinical trial”[Publication Type] OR “clinical trials as topic”[MeSH terms] OR “clinical trial”[All Fields]. After some rounds of trial and refinement, we formulate the final search term for PubMed as follows: (ebola OR ebola virus OR ebola virus disease OR EVD) AND (vaccine OR vaccination OR vaccinated OR immunization) AND (“clinical trial”[Publication Type] OR “clinical trials as topic”[MeSH Terms] OR “clinical trial”[All Fields]). Because studies on this topic are limited, we do not include outcome terms (safety and immunogenicity) in the search term, in order to capture more studies.
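As a practical aside (not part of the original workflow, which used the PubMed web interface), the same final query can also be run programmatically. The sketch below uses the rentrez R package purely for illustration; the retmax value and the post-processing are our own assumptions.

```r
# Hypothetical sketch: running the final PubMed query with the rentrez
# package. The query string is the one formulated above; retmax is an
# illustrative choice, not part of the published method.
library(rentrez)

query <- paste(
  "(ebola OR ebola virus OR ebola virus disease OR EVD)",
  "AND (vaccine OR vaccination OR vaccinated OR immunization)",
  'AND ("clinical trial"[Publication Type]',
  'OR "clinical trials as topic"[MeSH Terms]',
  'OR "clinical trial"[All Fields])'
)

res <- entrez_search(db = "pubmed", term = query, retmax = 500)
cat("Records found:", res$count, "\n")  # total number of hits
head(res$ids)                           # PubMed IDs of the first records
```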

Searching databases, importing all results into a library, and exporting to an Excel sheet

According to the AMSTAR guidelines, at least two databases have to be searched in an SR/MA [8], but the more databases you search, the greater, more accurate, and more comprehensive the yield. The choice of databases depends mostly on the review question; for a study of clinical trials, you will rely mostly on Cochrane, mRCTs, or the International Clinical Trials Registry Platform (ICTRP). Here, we propose 12 databases (PubMed, Scopus, Web of Science, EMBASE, GHL, VHL, Cochrane, Google Scholar, ClinicalTrials.gov, mRCTs, POPLINE, and SIGLE), which together cover almost all published articles in tropical medicine and other health-related fields. Among these, POPLINE focuses on reproductive health; researchers should choose databases relevant to the research topic. Some databases do not support Boolean operators or quotation marks, and some have their own particular search syntax, so the initial search terms must be modified for each database. Manipulation guides for each online database search are presented in Additional file 5: Table S2, and the detailed search strategy for each database is found in Additional file 5: Table S3. The search term created in PubMed needs customization based on the specific characteristics of each database. An example of a Google Scholar advanced search for our topic is as follows:

With all of the words: ebola virus

With at least one of the words: vaccine vaccination vaccinated immunization

Where my words occur: in the title of the article

With all of the words: EVD

Finally, all records are collected into one EndNote library in order to delete duplicates, and the library is then exported to an Excel sheet. Using the duplicate-removal function with two settings is mandatory: delete all references that have (1) the same title and author and were published in the same year, and (2) the same title and author and were published in the same journal. References remaining after this step should be exported to an Excel file with the essential information for screening: the authors' names, publication year, journal, DOI, URL link, and abstract.
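For readers who prefer to de-duplicate outside EndNote, here is a minimal R sketch of the two rules just described, assuming a hypothetical data frame refs with author, title, year, and journal columns (the column names are our own illustration, not from the article):

```r
# Minimal de-duplication sketch using dplyr; `refs` is a hypothetical data
# frame of exported references. Rule 1 drops records sharing title, author,
# and year; rule 2 drops records sharing title, author, and journal.
library(dplyr)

dedup <- refs %>%
  mutate(key_title  = tolower(trimws(title)),
         key_author = tolower(trimws(author))) %>%
  distinct(key_title, key_author, year,    .keep_all = TRUE) %>%  # rule 1
  distinct(key_title, key_author, journal, .keep_all = TRUE) %>%  # rule 2
  select(-key_title, -key_author)
```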

Protocol writing and registration

Protocol registration at an early stage guarantees transparency in the research process and protects against duplication. It is also documented proof of the team's plan of action, research question, eligibility criteria, intervention/exposure, quality assessment, and pre-analysis plan. We recommend that researchers send the protocol to the principal investigator (PI) for revision, then upload it to a registry site. Many registry sites are available for SR/MAs, such as those proposed by the Cochrane and Campbell collaborations; we recommend registering the protocol in PROSPERO, as it is the easiest. The layout of a protocol template according to PROSPERO can be found in Additional file 5: File S1.

Title and abstract screening

Decisions to select retrieved articles for further assessment are based on the eligibility criteria, to minimize the chance of including non-relevant articles. According to the Cochrane guidance, two reviewers are required for this step, but for beginners and junior researchers this can be tiring; thus, based on our experience, we propose that at least three reviewers work independently to reduce the chance of error, particularly in teams with a large number of authors, to add more scrutiny and ensure proper conduct. The quality with three reviewers is usually better than with two: two reviewers alone may simply hold different opinions and be unable to decide, whereas a third opinion breaks the tie. Here are some examples of systematic reviews which we conducted following the same strategy (by different groups of researchers in our research group) and published successfully, featuring ideas relevant to tropical medicine and disease [9, 10, 11].

In this step, duplicates are removed manually whenever the reviewers find them. When there is doubt about an article, the team should be inclusive rather than exclusive until the lead researcher or PI makes a decision after discussion and consensus. All excluded records should be given exclusion reasons.

Full text downloading and screening

Many search engines provide links to free full-text articles. When a full text is not found, we can search research websites such as ResearchGate, which offers an option to request the full text directly from the authors, explore the archives of the relevant journals, or contact the PI about purchasing it if available. Similarly, 2–3 reviewers work independently to decide which full texts to include according to the eligibility criteria, reporting the reasons for exclusion. Any disagreement is resolved by discussion.

Manual search

To reduce bias, one has to exhaust all possibilities by performing an explicit hand search to retrieve reports that may have been missed by the first search [12]. We apply five manual-searching methods: searching the reference lists of included studies/reviews, contacting authors, contacting experts, and looking at related articles and cited articles in PubMed and Google Scholar.

We describe here three consecutive methods to increase and refine the yield of manual searching: first, searching the reference lists of included articles; second, citation tracking, in which the reviewers track all the articles that cite each included article, which may involve electronic database searching; and third, similarly, following all “related to” or “similar” articles. Each of these methods can be performed by 2–3 independent reviewers, and every possibly relevant article must undergo further scrutiny against the inclusion criteria, following the same title/abstract and full-text screening applied to records from the electronic databases.

We propose independent reviewing, assigning each team member a “tag” and a distinct method, then compiling all the results at the end to compare and discuss differences; this maximizes retrieval and minimizes bias. The number of articles included through manual searching should be stated before they are added to the overall included records.

Data extraction and quality assessment

This step entails collecting data from the included full texts in a structured extraction Excel sheet, which is pilot-tested beforehand on a few random studies. We recommend extracting both adjusted and non-adjusted data, because adjusted estimates account for confounding factors and both can be pooled later in the analysis [13]. The extraction should be executed by 2–3 independent reviewers. The sheet is usually divided into study characteristics, patient characteristics, outcomes, and the quality assessment (QA) tool.

Data presented in graphs should be extracted with software tools such as WebPlotDigitizer [14]. Most of the equations that can be used during extraction, prior to analysis, to estimate the standard deviation (SD) from other variables are found in Additional file 5: File S2, with their references: Hozo et al. [15], Xiang et al. [16], and Rijkom et al. [17]. A variety of QA tools are available, depending on the design: the RoB 2 Cochrane tool for randomized controlled trials [18], presented in Additional file 1: Figure S1 and Additional file 2: Figure S2 (from previously published data [19]); the NIH tool for observational and cross-sectional studies [20]; the ROBINS-I tool for non-randomized studies [21]; the QUADAS-2 tool for diagnostic studies; the QUIPS tool for prognostic studies; the CARE tool for case reports; and ToxRtool for in vivo and in vitro studies. We recommend that 2–3 reviewers independently assess the quality of the studies and add it to the data extraction form before inclusion in the analysis, to reduce the risk of bias. In the NIH tool for observational studies (cohort and cross-sectional), as in this Ebola case, reviewers rate each of the 14 items as yes, no, or not applicable, and an overall score is calculated by adding the item scores (yes = 1; no or NA = 0). Each paper is then classified as a poorly (0–5), fairly (6–9), or well (10–14) conducted study.
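To make the SD-estimation step concrete, below is a small sketch of the widely cited Hozo et al. [15] approximations for recovering a mean and SD from a reported median, range, and sample size. The exact formulas, and alternatives such as Xiang et al. [16], are in Additional file 5: File S2, so treat this only as an illustration:

```r
# Sketch of the Hozo et al. [15] approximations: estimate the mean and SD
# from the minimum (a), median (m), maximum (b), and sample size (n).
hozo_mean <- function(a, m, b) (a + 2 * m + b) / 4

hozo_sd <- function(a, m, b, n) {
  if (n <= 15) {
    # small samples: variance formula from Hozo et al.
    sqrt((1 / 12) * (((a - 2 * m + b)^2) / 4 + (b - a)^2))
  } else if (n <= 70) {
    (b - a) / 4  # moderate samples: SD ~ range/4
  } else {
    (b - a) / 6  # large samples: SD ~ range/6
  }
}

hozo_mean(2, 7, 20)    # e.g., median 7, range 2-20
hozo_sd(2, 7, 20, 40)  # n = 40, so SD ~ range/4 = 4.5
```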

In the Ebola case example above, authors can extract the following information: authors' names, country of patients, year of publication, study design (case report, cohort study, clinical trial, or RCT), sample size, time point after Ebola infection, follow-up interval after vaccination, efficacy, safety, adverse effects after vaccination, and the QA sheet (Additional file 6: Data S1).

Data checking

Because human error and bias are expected, we recommend a data checking step, in which every included article is compared against its counterpart in the extraction sheet using evidence photos (screenshots of the source data), to detect mistakes. We advise assigning articles to 2–3 independent reviewers, ideally not the ones who extracted those articles. When resources are limited, each reviewer is assigned articles different from those they extracted in the previous stage.

Statistical analysis

Investigators use different methods for combining and summarizing the findings of included studies. Before analysis there is an important data-cleaning step, in which the analyst organizes the extraction sheet into a form that can be read by the analytical software. Analysis is of two types, qualitative and quantitative: qualitative analysis mostly describes data in SR studies, while quantitative analysis consists of two main types, MA and network meta-analysis (NMA). Subgroup, sensitivity, and cumulative analyses and meta-regression are appropriate for testing whether the results are consistent, investigating the effect of certain confounders on the outcome, and finding the best predictors. Publication bias should be assessed to investigate the presence of missing studies, which can affect the summary estimate.

To illustrate a basic meta-analysis, we provide imaginary data for the research question about Ebola vaccine safety (in terms of adverse events 14 days after injection) and immunogenicity (rise in Ebola virus antibodies in geometric mean titer 6 months after injection). Assume that after searching and data extraction we decided to analyse the safety and immunogenicity of Ebola vaccine “A.” Other Ebola vaccines were not meta-analyzed because of the limited number of studies (they are instead included in the narrative review). The imaginary data for the vaccine safety meta-analysis can be accessed in Additional file 7: Data S2. To do the meta-analysis, we can use free software such as RevMan [22] or the R package meta [23]. In this example, we use the R package meta; its tutorial can be accessed through the “General Package for Meta-Analysis” tutorial PDF [23]. The R code and guidance for the meta-analysis can be found in Additional file 5: File S3.

For the analysis, we assume that the studies are heterogeneous in nature and therefore choose a random-effects model. We analyse the safety of Ebola vaccine A: from the data table we can see several adverse events occurring after intramuscular injection of vaccine A. Suppose that we include six studies that fulfill our inclusion criteria. We can then run a random-effects meta-analysis for each adverse event extracted from the studies, for example arthralgia, using the R meta package.
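A minimal sketch of such an analysis with the meta package is shown below. The event counts are invented placeholders (the article's simulated data are in Additional file 7: Data S2); metabin is the package's standard function for binary outcomes:

```r
# Illustrative random-effects meta-analysis of arthralgia events with the
# R meta package. All counts below are invented for illustration only.
library(meta)

dat <- data.frame(
  study   = c("A", "B", "C", "D", "E", "F"),
  event.e = c(12, 10,  8, 15,  9, 11),    # arthralgia events, vaccine arm
  n.e     = c(100, 95, 80, 120, 90, 105), # subjects, vaccine arm
  event.c = c(11,  9,  9, 13,  8, 10),    # arthralgia events, placebo arm
  n.c     = c(100, 95, 80, 120, 90, 105)  # subjects, placebo arm
)

m <- metabin(event.e, n.e, event.c, n.c,
             studlab = study, data = dat,
             sm = "OR", method = "MH")  # Mantel-Haenszel odds ratios

summary(m)  # pooled OR with 95% CI, p values, and I-squared
forest(m)   # draws the forest plot (cf. the layout of Fig. 3)
```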

From the results shown in Additional file 3: Figure S3, the odds ratio (OR) for arthralgia is 1.06 (95% CI 0.79 to 1.42), p value = 0.71, which means there is no evidence of an association between intramuscular injection of Ebola vaccine A and arthralgia: the OR is close to 1 and the p value is non-significant (> 0.05).

In a meta-analysis, we can also visualize the results in a forest plot; Fig. 3 shows an example from the simulated analysis.

Fig. 3. Random-effects model forest plot for comparison of vaccine A versus placebo

In the forest plot, we can see the six studies (A to F) and their respective ORs (95% CIs). The green box represents the effect size (here, the OR) of each study; a bigger box means the study carries more weight (i.e., a larger sample size). The blue diamond represents the pooled OR of the six studies. The diamond crosses the vertical line OR = 1, sitting almost symmetrically on either side of it, which indicates a non-significant association. We can confirm this from the 95% confidence interval, which includes one, and the p value > 0.05.

For heterogeneity, we see that I² = 0%, which means no heterogeneity is detected; the studies are relatively homogeneous (rare in real analyses). To evaluate publication bias in the meta-analysis of arthralgia, we can use the metabias function from the R meta package (Additional file 4: Figure S4) and visualize it with a funnel plot. The results are shown in Fig. 4. The p value associated with this test is 0.74, indicating symmetry of the funnel plot, which we can confirm by inspecting the plot.

Figure 4. Publication bias funnel plot for the comparison of vaccine A versus placebo

In the funnel plot, the numbers of studies on the left and right sides are the same; the plot is therefore symmetric, indicating no detectable publication bias. A sketch of this check follows.
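
A hedged sketch of the publication-bias check, continuing from the object `m` above. Note that metabias() normally refuses to run with fewer than ten studies; `k.min` is lowered here purely for illustration.

```r
# Egger-type linear regression test of funnel plot asymmetry.
metabias(m, method.bias = "linreg", k.min = 6)

# Funnel plot: a roughly symmetric scatter suggests no publication bias.
funnel(m)
```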

Sensitivity analysis examines how robust the pooled result is by removing one study at a time from the MA and checking whether the summary estimate remains significant. If all included studies have p values < 0.05, removing any single study will not change the significant association. It is only performed when there is a significant pooled association; since the p value of the MA here is 0.71, which is greater than 0.05, sensitivity analysis is not needed for this case study example. If, however, there are two studies with p values > 0.05, removing either of them can result in a loss of significance. A leave-one-out sketch follows.
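
Where a leave-one-out check is warranted, the meta package provides metainf(); a minimal sketch using the object `m` from above:

```r
# Recompute the pooled OR with each study omitted in turn; large shifts
# in the estimate or its significance flag influential studies.
sens <- metainf(m, pooled = "random")
print(sens)
forest(sens)   # one row per omitted study
```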

Double data checking

For more assurance of the quality of the results, the analyzed data should be rechecked against the full texts, with screenshots kept as evidence, so that the PI of the study can verify the data directly.

Manuscript writing, revision, and submission to a journal

The manuscript is written around the four standard scientific sections: introduction, methods, results, and discussion, usually followed by a conclusion. Preparing a characteristics table for study and patient characteristics is a mandatory step; a template can be found in Additional file 5: Table S3.

After finishing the manuscript, characteristics table, and PRISMA flow diagram, the team should send the draft to the PI for revision, address the PI's comments, and finally choose a suitable journal whose field and impact factor fit the manuscript. Read the author guidelines of the target journal carefully before submitting.

The role of evidence-based medicine in biomedical research is growing rapidly, and SR/MAs are increasingly common in the medical literature. This paper has sought to provide a comprehensive approach that enables reviewers to produce high-quality SR/MAs. We hope that readers gain a general understanding of how to conduct an SR/MA and the confidence to perform one, although this kind of study requires more complex steps than a narrative review.

Beyond the basic steps of conducting an MA, there are many advanced techniques applied for specific purposes. One is meta-regression, which investigates the association between a confounder and the results of the MA. Furthermore, there are other types of analysis beyond the standard MA, such as NMA and mega MA. In an NMA, we investigate the differences among several comparisons when there are not enough head-to-head data for a standard meta-analysis; it uses both direct and indirect comparisons to conclude which of the competitors is best (a minimal sketch follows). Mega MA, or MA of individual patients, summarizes the results of independent studies using their individual subject data. Because a more detailed analysis is possible, it is useful for repeated-measures and time-to-event analyses. It can also support analysis of variance and multiple regression; however, it requires a homogeneous dataset and is time-consuming to conduct [24].
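
To make the NMA idea concrete, here is a minimal sketch using the netmeta package (not used in this article); the contrasts, treatment labels, and effect estimates are entirely hypothetical.

```r
library(netmeta)

# Hypothetical pairwise contrasts: log odds ratios and standard errors.
nma_dat <- data.frame(
  TE      = c(0.12, -0.05, 0.20),
  seTE    = c(0.10, 0.12, 0.15),
  treat1  = c("VaccineA", "VaccineA", "VaccineB"),
  treat2  = c("Placebo", "VaccineB", "Placebo"),
  studlab = c("S1", "S2", "S3")
)

# Combine direct and indirect evidence into one network estimate;
# argument names follow recent netmeta versions.
net <- netmeta(TE, seTE, treat1, treat2, studlab,
               data = nma_dat, sm = "OR", random = TRUE)
summary(net)
```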

Conclusions

Systematic review/meta-analysis steps include: developing and validating the research question, forming the eligibility criteria, building the search strategy, searching the databases, importing all results into a reference library and exporting them to an Excel sheet, writing and registering the protocol, title and abstract screening, full-text screening, manual searching, data extraction and quality assessment, data checking, statistical analysis, double data checking, and manuscript writing, revision, and submission to a journal.

Availability of data and materials

Not applicable.

Abbreviations

NMA: Network meta-analysis

PI: Principal investigator

PICO: Population, Intervention, Comparison, Outcome

PRISMA: Preferred Reporting Items for Systematic Review and Meta-analysis statement

QA: Quality assessment

SPIDER: Sample, Phenomenon of Interest, Design, Evaluation, Research type

SR/MA: Systematic review and meta-analysis

References

Bello A, Wiebe N, Garg A, Tonelli M. Evidence-based decision-making 2: systematic reviews and meta-analysis. Methods Mol Biol (Clifton, NJ). 2015;1281:397–416.

Khan KS, Kunz R, Kleijnen J, Antes G. Five steps to conducting a systematic review. J R Soc Med. 2003;96(3):118–21.

Rys P, Wladysiuk M, Skrzekowska-Baran I, Malecki MT. Review articles, systematic reviews and meta-analyses: which can be trusted? Polskie Archiwum Medycyny Wewnetrznej. 2009;119(3):148–56.

Higgins JPT, Green S. Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 [updated March 2011]. 2011.

Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. BMJ. 2009;339:b2535.

Methley AM, Campbell S, Chew-Graham C, McNally R, Cheraghi-Sohi S. PICO, PICOS and SPIDER: a comparison study of specificity and sensitivity in three search tools for qualitative systematic reviews. BMC Health Serv Res. 2014;14:579.

Gross L, Lhomme E, Pasin C, Richert L, Thiebaut R. Ebola vaccine development: systematic review of pre-clinical and clinical studies, and meta-analysis of determinants of antibody response variability after vaccination. Int J Infect Dis. 2018;74:83–96.

Shea BJ, Reeves BC, Wells G, Thuku M, Hamel C, Moran J, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ. 2017;358:j4008.

Giang HTN, Banno K, Minh LHN, Trinh LT, Loc LT, Eltobgy A, et al. Dengue hemophagocytic syndrome: a systematic review and meta-analysis on epidemiology, clinical signs, outcomes, and risk factors. Rev Med Virol. 2018;28(6):e2005.

Morra ME, Altibi AMA, Iqtadar S, Minh LHN, Elawady SS, Hallab A, et al. Definitions for warning signs and signs of severe dengue according to the WHO 2009 classification: systematic review of literature. Rev Med Virol. 2018;28(4):e1979.

Morra ME, Van Thanh L, Kamel MG, Ghazy AA, Altibi AMA, Dat LM, et al. Clinical outcomes of current medical approaches for Middle East respiratory syndrome: a systematic review and meta-analysis. Rev Med Virol. 2018;28(3):e1977.

Vassar M, Atakpo P, Kash MJ. Manual search approaches used by systematic reviewers in dermatology. Journal of the Medical Library Association: JMLA. 2016;104(4):302.

Naunheim MR, Remenschneider AK, Scangas GA, Bunting GW, Deschler DG. The effect of initial tracheoesophageal voice prosthesis size on postoperative complications and voice outcomes. Ann Otol Rhinol Laryngol. 2016;125(6):478–84.

Rohatgi A. WebPlotDigitizer. 2014.

Hozo SP, Djulbegovic B, Hozo I. Estimating the mean and variance from the median, range, and the size of a sample. BMC Med Res Methodol. 2005;5(1):13.

Wan X, Wang W, Liu J, Tong T. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range. BMC Med Res Methodol. 2014;14(1):135.

Van Rijkom HM, Truin GJ, Van’t Hof MA. A meta-analysis of clinical studies on the caries-inhibiting effect of fluoride gel treatment. Caries Res. 1998;32(2):83–92.

Higgins JP, Altman DG, Gotzsche PC, Juni P, Moher D, Oxman AD, et al. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928.

Tawfik GM, Tieu TM, Ghozy S, Makram OM, Samuel P, Abdelaal A, et al. Speech efficacy, safety and factors affecting lifetime of voice prostheses in patients with laryngeal cancer: a systematic review and network meta-analysis of randomized controlled trials. J Clin Oncol. 2018;36(15_suppl):e18031.

Wannemuehler TJ, Lobo BC, Johnson JD, Deig CR, Ting JY, Gregory RL. Vibratory stimulus reduces in vitro biofilm formation on tracheoesophageal voice prostheses. Laryngoscope. 2016;126(12):2752–7.

Sterne JAC, Hernán MA, Reeves BC, Savović J, Berkman ND, Viswanathan M, et al. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions. BMJ. 2016;355.

The Nordic Cochrane Centre, The Cochrane Collaboration. Review Manager (RevMan). Version 5.0. Copenhagen: The Nordic Cochrane Centre; 2008.

Schwarzer G. meta: An R package for meta-analysis. R News. 2007;7(3):40–5.

Simms LLH. Meta-analysis versus mega-analysis: is there a difference? Oral budesonide for the maintenance of remission in Crohn’s disease: Faculty of Graduate Studies, University of Western Ontario; 1998.

Acknowledgements

This study was conducted (in part) at the Joint Usage/Research Center on Tropical Disease, Institute of Tropical Medicine, Nagasaki University, Japan.

Author information

Authors and Affiliations

Faculty of Medicine, Ain Shams University, Cairo, Egypt

Gehad Mohamed Tawfik

Online Research Club, http://www.onlineresearchclub.org/

Gehad Mohamed Tawfik, Kadek Agus Surya Dila, Muawia Yousif Fadlelmola Mohamed, Dao Ngoc Hien Tam, Nguyen Dang Kien & Ali Mahmoud Ahmed

Pratama Giri Emas Hospital, Singaraja-Amlapura street, Giri Emas village, Sawan subdistrict, Singaraja City, Buleleng, Bali, 81171, Indonesia

Kadek Agus Surya Dila

Faculty of Medicine, University of Khartoum, Khartoum, Sudan

Muawia Yousif Fadlelmola Mohamed

Nanogen Pharmaceutical Biotechnology Joint Stock Company, Ho Chi Minh City, Vietnam

Dao Ngoc Hien Tam

Department of Obstetrics and Gynecology, Thai Binh University of Medicine and Pharmacy, Thai Binh, Vietnam

Nguyen Dang Kien

Faculty of Medicine, Al-Azhar University, Cairo, Egypt

Ali Mahmoud Ahmed

Evidence Based Medicine Research Group & Faculty of Applied Sciences, Ton Duc Thang University, Ho Chi Minh City, 70000, Vietnam

Nguyen Tien Huy

Faculty of Applied Sciences, Ton Duc Thang University, Ho Chi Minh City, 70000, Vietnam

Department of Clinical Product Development, Institute of Tropical Medicine (NEKKEN), Leading Graduate School Program, and Graduate School of Biomedical Sciences, Nagasaki University, 1-12-4 Sakamoto, Nagasaki, 852-8523, Japan

Contributions

NTH and GMT were responsible for the idea and its design. The figure was done by GMT. All authors contributed to the manuscript writing and approval of the final version.

Corresponding author

Correspondence to Nguyen Tien Huy.

Ethics declarations

Ethics approval and consent to participate, consent for publication, competing interests.

The authors declare that they have no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional files

Additional file 1:

Figure S1. Risk of bias assessment graph of included randomized controlled trials. (TIF 20 kb)

Additional file 2:

Figure S2. Risk of bias assessment summary. (TIF 69 kb)

Additional file 3:

Figure S3. Arthralgia results of random effect meta-analysis using R meta package. (TIF 20 kb)

Additional file 4:

Figure S4. Arthralgia linear regression test of funnel plot asymmetry using R meta package. (TIF 13 kb)

Additional file 5:

Table S1. PRISMA 2009 Checklist. Table S2. Manipulation guides for online database searches. Table S3. Detailed search strategy for twelve database searches. Table S4. Baseline characteristics of the patients in the included studies. File S1. PROSPERO protocol template file. File S2. Extraction equations that can be used prior to analysis to get missed variables. File S3. R codes and its guidance for meta-analysis done for comparison between EBOLA vaccine A and placebo. (DOCX 49 kb)

Additional file 6:

Data S1. Extraction and quality assessment data sheets for EBOLA case example. (XLSX 1368 kb)

Additional file 7:

Data S2. Imaginary data for EBOLA case example. (XLSX 10 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article.

Tawfik, G.M., Dila, K.A.S., Mohamed, M.Y.F. et al. A step by step guide for conducting a systematic review and meta-analysis with simulation data. Trop Med Health 47, 46 (2019). https://doi.org/10.1186/s41182-019-0165-6

Received: 30 January 2019

Accepted: 24 May 2019

Published: 01 August 2019

DOI: https://doi.org/10.1186/s41182-019-0165-6


  • Open access
  • Published: 10 September 2014

Viewing systematic reviews and meta-analysis in social research through different lenses

  • Jacqueline Davis,
  • Kerrie Mengersen,
  • Sarah Bennett &
  • Lorraine Mazerolle

SpringerPlus volume 3, Article number: 511 (2014)


Systematic reviews and meta-analyses are used to combine results across studies to determine an overall effect. Meta-analysis is especially useful for combining evidence to inform social policy, but meta-analyses of applied social science research may encounter practical issues arising from the nature of the research domain. The current paper identifies potential resolutions to four issues that may be encountered in systematic reviews and meta-analyses in social research. The four issues are: scoping and targeting research questions appropriate for meta-analysis; selecting eligibility criteria where primary studies vary in research design and choice of outcome measures; dealing with inconsistent reporting in primary studies; and identifying sources of heterogeneity with multiple confounded moderators. The paper presents an overview of each issue with a review of potential resolutions, identified from similar issues encountered in meta-analysis in medical and biological sciences. The discussion aims to share and improve methodology in systematic reviews and meta-analysis by promoting cross-disciplinary communication, that is, to encourage ‘viewing through different lenses’.

Systematic reviews and meta-analyses are increasingly important techniques in social science research. These techniques are used to synthesise research results to determine an overall effect estimate for a population of studies. A systematic review refers to the process of systematically locating and collating all available information on an effect. Meta-analysis refers to the statistical techniques used to combine this information to give an overall estimate of the effect in the population. Together, systematic reviews and meta-analyses can help to clarify the state of a field of research, determine whether an effect is constant across studies, and discover what future studies are required to demonstrate the effect. Advanced meta-analysis techniques can also be used to discover what study-level or sample characteristics have an effect on the phenomenon being studied; for example, whether studies conducted in one cultural context show significantly different results from studies conducted in other cultural contexts.

Although meta-analysis was originally devised for use in the social sciences (Glass, 1976 ), the technique was quickly adopted and further developed for use in the medical sciences. Currently the medical sciences produce the majority of the literature on meta-analysis, including meta-analysis methods. In the social sciences, the use of meta-analysis is rapidly increasing (Figure  1 ), with meta-analysis being applied to an ever-broader range of subject matter. The application of meta-analysis to social science research, however, is not necessarily straightforward, and methods developed in medical research may be difficult to access and apply to social research, especially for applied researchers seeking to use meta-analysis in their own disciplines for the first time.

Figure 1. Results of a Scopus search for "meta-analysis" in title, abstract and keywords.

A number of techniques and processes, each requiring methodological choices, fall under the umbrella term ‘meta-analysis’. With the diversity of new applications for meta-analysis, new issues in implementing the methodology have arisen. Some of these issues have been addressed by review co-ordinating bodies, and recommendations have been made on how to deal with them; for example, the issue of publication or small-study bias has been carefully addressed (Higgins & Green, 2011). Other problems seem to have been raised independently in different disciplines, with a lack of overarching consensus on how to resolve them, and individual study authors applying ad hoc resolutions as they encounter each issue. Indeed, it is difficult for even experienced meta-analysts to follow ongoing methodological and technical debates and keep up with the latest findings, especially across different substantive disciplines (Schulze, 2007). This lack of communication is particularly acute in disciplines that have only recently begun to use meta-analysis and where research data are less structured than in clinical disciplines. In these cases, researchers may look across disciplines, viewing meta-analysis through other disciplinary lenses, to see the similarity between issues encountered in their own reviews and issues that have been encountered, and addressed, in the work of others.

The current paper reviews four practical issues that may be encountered in meta-analyses of applied social science research, and presents a multidisciplinary review of some approaches that have been used to resolve them. The first issue is scoping and targeting the systematic review to ensure that the question is appropriate for meta-analysis. The second is choosing eligibility criteria for included studies, in the absence of consensus on valid evaluation designs and appropriate outcome measures within the primary studies. The third is dealing with inconsistent reporting styles in the body of primary research, which greatly increase the difficulty of meta-analysis, any analysis of heterogeneity, and the application of any statistical tests or corrections. The final issue is attempting moderator analysis in the presence of multiple confounded study-level moderators.

The intent of the following sections is to provide context and guidance to applied researchers seeking to use meta-analysis to synthesise research in their own domains, to inform their own research or to provide guidance for social policy. Each issue is presented with a brief description and an example, followed by options for addressing the issue, with an effort to include alternatives from multiple academic disciplines. This discussion is not intended to provide a full guide to meta-analysis, but instead, to highlight areas of research that may offer assistance to reviewers encountering one or more of these issues in the process of their own systematic review and meta-analysis.

Results and discussion

Issue 1. Scoping and targeting the review

The description of meta-analysis as a “gold standard” for evidence-based practice in medicine (Sackett et al. 2000), together with the increasing number of meta-analyses on a wide variety of topics, may give the impression that meta-analysis is the best technique to answer any research question in any field. Meta-analysis is, however, a specific statistical technique, and like any statistical technique is only appropriate for a narrow range of research questions and data. Scoping decisions have been addressed elsewhere, including choosing between broad and narrow inclusion criteria (see Issue 2, below), and whether to take a “black box” effectiveness-led approach or to focus on the specific causal pathways of an intervention (Stewart et al. 2012). A further, less well-addressed, issue is scoping a meta-analysis in a research area dominated by a few large-scale empirical studies.

Many fields, including ecology, medicine, public health, and criminology, face the problem of a single large study dominating a particular field of research. Manipulating policy may be practically and ethically difficult, and so tests of policy effectiveness may come in the form of quasi-experimental studies where the comparison group is a different geographical area, a different time point, or a waiting list. When a randomised field trial is conducted, it may be of very large scale compared to non-randomised evaluations of the same intervention, because of the resources required to establish agency partnerships; for example, initiating a field trial of a policing strategy requires the cooperation of many agencies, including police officers, police management, multiple levels of government, and administrative and funding bodies. The dominance of such large-scale trials can result in a broad area of literature being based on very few independent studies, with multiple scientific articles resulting from the same dataset.

A possible result of this dominant study phenomenon is a disproportionate sense of optimism about the strength of evidence for a particular intervention or theory. Many papers stemming from few empirical studies are problematic for meta-analysis because the technique requires each observation included in the meta-analysis to be independent, so the true number of effect sizes available for meta-analysis may be much smaller than it first appears. Further problems can arise when the large-scale trials are systematically different from smaller trials, due to the different requirements of each; for example, when randomised trials are large and non-randomised trials are small, it may be difficult to tell whether differences in results are due to the size of the trial or the randomisation.

An illustration of the dominant study issue is given by Mazerolle et al. ( 2013 ) in a legitimacy policing review. A number of observational surveys, combined with multiple publications reporting on a large-scale field trial (Sherman et al. 1998 ) and a great deal of theoretical debate, produced an impression of a large amount of evidence supporting the effectiveness of legitimacy interventions in policing. When an attempt was made to locate this evidence, however, few studies were identified that tested actual interventions, and very few were randomised. The authors resolved their issue by developing a tailored set of standards for eligible comparison groups, including study design in a moderator analysis, and proceeding with the meta-analysis. However, other reviewers may have decided that the available evidence was not sufficient for meta-analysis.

Deciding whether evidence is suitable for meta-analysis is, at present, a question of judgement, and what is suitable evidence in one discipline may be unacceptable in another (Koricheva et al. 2013; Ioannidis et al. 2008). Questions for which meta-analysis is not suited may be better addressed using a traditional narrative review or an alternative, such as best-evidence synthesis (Slavin 1987), thematic synthesis (Thomas & Harden, 2008), interpretive synthesis (Dixon-Woods et al. 2006), or scoping reviews (Arksey & O’Malley, 2005). These techniques may also be used as a broader background to a more focused meta-analysis, enabling both a broad review of a field and a statistically rigorous analysis of a subset of comparable studies.

Once researchers have scoped their review appropriately for meta-analysis, they may choose to register with a peer review group, or at least follow the published guidelines of such a group. The choice of which guidelines to follow should be directed by careful consideration of both the substantive topic of the review, and the likely methodological issues that may arise in the review process. Such consideration is necessary because review groups may differ in their recommendations, both for specific methodological issues and for the general review process.

Figure 2 presents a summary of the steps defined by a number of distinguished review coordinating groups and experts. The Cochrane Collaboration is a premier medical systematic review co-ordinating group and peer review team that publishes a regularly updated Handbook for systematic review and meta-analysis procedures (Higgins & Green, 2011). The Cochrane Handbook is recommended by the Campbell Collaboration, a related review coordinating group focusing on social sciences. Other organisations also publish guidelines on conduct and methods for systematic reviews, including the York Centre for Reviews and Dissemination, the Evidence for Policy and Practice Information and Co-ordinating Centre (EPPI-Centre) at the University of London, and the Berkeley Systematic Reviews Group at the University of California. The figure collates information based on the contents sections of each organisation’s publication, with the assumption that the contents would provide some indication of what each organisation considered to be the primary sections of a review, and roughly in what order they should be considered.

Figure 2. Steps in a meta-analysis. Note: These steps were taken from the contents section of the relevant handbook.

As seen in Figure  2 , even a small survey of these very well recognised groups produces a range of instructions regarding where to start, where to finish, the number of steps involved, what those steps consist of, and what is the best order for review stages. Each group gives a starting point, and agrees that synthesis is a key step. However, the recommendations differ in detail, especially on the archiving and dissemination activities expected of the reviewers. This difference in focus is partially due to the differing concerns of each discipline. Meta-analyses in medicine (the focus of the Cochrane Handbook) are aimed primarily at informing medical practitioners’ decision making (Higgins & Green, 2011 ), and as such focus on homogeneity of included sources and dissemination of treatment efficacy. In contrast, meta-analyses in social sciences and ecology may focus on identifying and describing heterogeneity, for the purposes of understanding the causal processes at work in the phenomenon under study (Koricheva et al. 2013 ). These differences in focus give rise to diverse perspectives on problems and, subsequently, can provide multiple “lenses” through which to view issues in meta-analysis.

Issue 2. Appropriate eligibility criteria for primary studies

2.1. Eligibility criteria for study designs

Systematic reviews in social research often encounter non-randomised evaluation designs. In social sciences, trials with a representative population may be considered more valuable than laboratory studies because of their ecological validity (Sampson, 2010). In addition, it is often not ethical or legal to randomise assignment to treatment and comparison conditions in social science experiments (Baunach, 1980). Furthermore, practical constraints may limit the implementation of such experiments, especially if they require the co-operation of multiple agencies, which can be time- and resource-intensive to establish and maintain. Therefore, trials of new interventions in many areas of social science are often quasi-experimental, interrupted time series, or simple correlational designs.

As such, systematic reviews in social science disciplines may need to deal with a variety of primary study designs. The medical literature generally advises against combining field trials with laboratory trials, or combining non-randomised designs with randomised designs (Higgins & Green, 2011; Shadish & Myers, 2004). One reason put forward in support of separating randomised and non-randomised designs in meta-analysis is based on randomisation as a quality indicator; that is, evaluations using randomised designs must be of high quality, and are therefore more likely than non-randomised (low-quality) designs to show a real effect. However, treatment randomisation does not prevent difficulties of causal inference. Non-random sampling, differential attrition, and lack of treatment integrity can introduce alternate explanations for treatment effects even in randomised trials (Littell, 2004; Sampson, 2010; Shadish et al. 2002). Therefore, some authors argue, the selection of studies should be based on their likely validity in the context of the research question (Littell et al. 2008). Moreover, it is apparent that meta-analysis of a subset of available studies has the potential to lead to less accurate, biased or otherwise misleading results. These factors have led reviewers to follow a ‘best available’ inclusion strategy when selecting study designs to combine in a meta-analysis (Higgins & Green, 2011), or to use a combination of narrative review and meta-analysis to attempt to cover the range of evidence on a topic (Stewart et al. 2012). In general, this issue appears to be approached on an ad hoc basis, and methods for combining studies with different evaluative designs are still being developed.

The above discussion suggests that when conducting a systematic review on any question, all of the likely sources of bias in a corpus of studies should be considered before deciding whether or not to exclude studies based on randomisation. In order to obtain the best possible estimate of an intervention’s effectiveness, it may be necessary to review sources that investigate the problem using multiple methodologies. It is therefore useful to include conclusions from qualitative studies, information about treatment integrity and difficulties of implementation, and other non-empirical information arising from primary studies in a narrative review section that complements the meta-analysis results. Social researchers have developed detailed methods for conducting this type of mixed-methods systematic review (Harden & Thomas, 2005 ).

2.2 Eligibility criteria for outcome measures

In any evaluation, different outcomes are considered to be important by different stakeholders. In criminology, for example, reoffending is a quantitative outcome valued by police, while politicians may be more interested in public satisfaction with a particular intervention process. Scientists may also seek information on outcomes relevant to their theoretical model of the intervention effect, and process evaluation and cost-benefit analysis require further relevant but very different outcomes. In addition, each of these outcomes may be measured in a number of ways. Selecting relevant outcomes, therefore, is one hurdle that needs to be addressed in any meta-analysis.

A second hurdle is how to deal with studies that report multiple relevant outcomes based on the same sample. Large-scale social experiments may capitalise on the rare data collection opportunity by collecting information on a very wide range of outcomes. The number of outcomes thus arising from a single trial presents a challenge for a reviewer seeking the most relevant measure of intervention effectiveness across multiple studies. This challenge may extend to a number of issues: multiple different measures of the same construct within a study, a single study providing distinct outcomes measured in different units, and a lack of consistency among studies concerning what outcomes should be measured and how to measure them.

As illustration, these considerations raised a number of questions in the policing review of Mazerolle et al. (2013). Their questions included the following. Can we combine reoffending with participant satisfaction in an evaluation of policing programs? Can we combine participants’ ratings of satisfaction and perceived fairness of the process? Can we combine two different studies’ measures of satisfaction, if they use different questions with different scales and if so, how is such a combination to be achieved? If we don’t combine all the outcomes, should we just pick one? How do we decide which is important and which to discard? What about the studies that do not report on the outcomes we selected but still evaluate the program; should their information just be lost? If we want to investigate multiple outcomes, is it legitimate to perform multiple meta-analyses, in light of concerns about multiple testing and correlated outcomes? Questions like these are not limited to criminology, but may arise more often in research fields where data collection is complex or difficult, such as international development (Waddington et al. 2012). A potential solution, as employed in a systematic review of violent crime interventions in developing countries (Higginson et al. 2013), is to use a series of meta-analyses to test specific outcomes within a narrative review of more broadly defined outcomes.

A related issue is non-standard measurement of outcomes. Primary studies may present differences in terminology and operational definitions, failure to provide scale validity and reliability scores, and heuristic measurement methods. For example, these problems were observed by Mazerolle et al. ( 2013 ) in a review of policing programs, where public perception of the police was a key outcome. One reviewed study (Skogan & Steiner, 2004 ) measured perceptions of police with ten items assessing dimensions of police demeanour, responsiveness, and performance, while another (Hall, 1987 ) measured perceptions of police using a single item: ‘The Santa Ana Police Department is effective’. A third study (Ren et al. 2005 ) identified confidence as a key outcome and measured it with seven items asking whether officers were fair, courteous, honest, not intimidating, worked with citizens, treated citizens equally, and showed concern; while a fourth (Murphy et al. 2008 ) used four items measuring confidence in police, police professionalism, whether police do their job well, and respect for police, and called that legitimacy. Some authors had reported statistics for individual items (e.g., Sherman et al. 1998 ) while other authors had only reported statistics for an aggregate scale (e.g., Ren et al. 2005 ).

In some fields, where key outcomes are latent and must be observed through constructed questionnaires, this difficulty is obvious. In fields where outcomes are observed more directly, their selection and combination may be more straightforward. Nevertheless, examples of this issue can be found across many disciplines: in biology, plumage colouration could be measured with melanin-based or carotenoid-based coloration, further split into saturation, brightness, and hue (Badyaev & Hill, 2000); in medicine, treatment effectiveness could be measured with survival or quality-adjusted life years (Grann et al. 2002); and education interventions could be assessed with students’ social and emotional skills, attitudes, social behaviour, conduct problems, mental health, or academic performance (Durlak et al. 2011), for example. The common difficulty faced by reviewers across different disciplines is how to identify, select, and combine relevant outcomes for a meta-analysis.

Across the disciplines, different methods have been put forward for resolving the difficulty of identifying and combining relevant outcomes. One possibility is to simply combine outcomes into one large estimate of treatment effect. However, there is an inherent danger in this approach, of combining outcomes that may be affected in different directions by the treatment, producing an overall effect size that is somewhere in the middle but does not provide a full picture of the intervention’s effects. In addition, the interpretation of such an effect size may be very difficult. In ecology, some researchers argue that each question demands a unique outcome measurement that is defined by the research question and the context of the ecological process (Osenberg et al. 1999 ).

In social sciences, many reviews present multiple effect sizes for different outcomes separately (e.g., Hedges et al. 1994). This approach aids clarity of interpretation, but it also presents several difficulties. The first is the selection of appropriate outcomes to present. In a field where studies may measure a great number of outcomes for a single intervention, it is untenable to present separate effect sizes for them all – the huge number of outcomes would erase any previous advantage in ease of interpretation. Second, and more seriously, testing a very large number of outcomes for significance of effect size raises the probability of a Type 1 error beyond the acceptable threshold. Finally, in most cases the multiple outcomes are unlikely to be independent of one another, in which case presenting multiple outcome effect sizes may mislead as to the true effectiveness of an intervention.

Sophisticated statistical methods have been put forward to deal with the problem of multiple outcomes. For example, multivariate approaches demonstrated by Becker ( 2000 ) provide estimated correlations among outcome measures as well as a correction for missing outcome measures in some trials. These methods address many of the concerns above. Unfortunately, they are rarely applied by practitioners who encounter these issues in a review. Reasons for this lack of application may include a lack of awareness about the availability of these methods, or their increased statistical complexity. Additionally, the information required by these approaches may not be available in primary research in areas where data reporting requirements are not standardised (see the following section of this paper). For example, multivariate meta-analysis requires estimates of the correlations among outcomes reported within each study, information that is very rarely available (Riley, 2009 ). As computational technology, statistical familiarity, and data reporting standards improve, these solutions hopefully will become more accessible, and thus more widely used. At present, the recommendation of the Campbell Collaboration is simply to use ‘some procedure’ to account for outcome dependence, and to provide detailed description and justification for the choice of procedure (Becker et al. 2004 ).
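
As one concrete, hedged illustration of such a procedure, a multilevel model in metafor's rma.mv() can absorb dependence among multiple outcomes reported by the same study; the data frame `dat_long` and its column names are hypothetical.

```r
library(metafor)

# One row per effect size: yi = effect estimate, vi = its sampling
# variance, with several outcomes nested within each study.
res <- rma.mv(yi, vi,
              random = ~ 1 | study/outcome,  # outcomes nested in studies
              data = dat_long)               # hypothetical long-format data
summary(res)
```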

In light of the above discussion, the most sensible course for reviewers appears to be to decide a priori what outcomes are important and what definitions count towards each outcome, as is recommended by the Cochrane Collaboration (Higgins & Green, 2011 ). Reviewers can seek out the opinions of experts in the field of primary research to help determine what outcomes are useful. It is also suggested that reviewers consult a methodology expert to help determine which outcome measures are feasible to combine and how best to account for non-independence among outcomes. These consultations may be most helpful at the scoping stage of the review.

Issue 3. Data reporting in primary studies

Reporting standards vary among journals, and more generally among unpublished theses and reports. Each evaluation can have a different style for reporting analysis strategy and results, study design, and implementation issues. Many studies focus solely on statistical significance of the results, while others only report descriptive statistics. In addition, studies may report results only as “not significant”, and omit test statistics, descriptive statistics, or direction of effect thereafter.

This problem has been remarked upon by reviewers in many disciplines (Johansen & Gotzsche, 1999 ; Lipsey et al. 2006 ; Littell, 2004 ). Reporting guidelines now exist for randomised controlled trials (CONSORT) (Schulz et al. 2010 ), observational studies (STROBE) (Von Elm et al. 2007 ), non-randomised evaluation studies (TREND) (Des Jarlais et al. 2004 ), economic studies (Mason & Drummond, 1995 ), psychological studies (American Psychological Association, 2010 ), self-report data (Stone & Shiffman, 2002 ), and animal studies (Kilkenny et al. 2010 ). Online databases, such as the EQUATOR website (Simera et al. 2010 ) and the MIBBI project (Taylor et al. 2008 ), have been established to provide regularly updated lists of reporting guidelines and minimum information checklists. These guidelines are useful resources for reviewers seeking to understand the type of information that might be reported in primary studies in their field. Whether this information is actually reported, however, is not guaranteed, and may vary by discipline. For example, most psychology journals require articles to be submitted according to American Psychological Association guidelines, including a minimum reporting standard for methods and results; in contrast, biology journals have a wide range of reporting requirements, which vary from journal to journal. In most disciplines, it is likely that reviewers will encounter the issue of incomplete data reporting at some point.

To address the issue of incomplete data reporting, it may be possible to contact authors for further clarification and data regarding their analyses. Where this fails, it may be possible to make assumptions about the direction of effect or the experimental design based on the information provided in the document. In some cases it is feasible to back-calculate from test statistics to obtain a measure of effect size, using procedures outlined in the meta-analysis texts of Borenstein et al. (2009), Lipsey & Wilson (2001), and others, as sketched below. None of these procedures, however, can address the ambiguous direction of effect that may result from the primary study reporting a statistical test as “not significant”.
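
For instance, a standardized mean difference and its variance can be recovered from a reported independent-samples t statistic and the group sizes, using the standard conversion given in texts such as Borenstein et al. (2009); the numbers below are placeholders.

```r
# Back-calculate Cohen's d from a reported t statistic (placeholder values).
t_val <- 2.10; n1 <- 40; n2 <- 38

d     <- t_val * sqrt(1 / n1 + 1 / n2)                  # effect size
var_d <- (n1 + n2) / (n1 * n2) + d^2 / (2 * (n1 + n2))  # its sampling variance

c(d = d, var_d = var_d, se = sqrt(var_d))
```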

A key consideration, especially when dealing with effect sizes that are not reported because the test statistic was “not significant”, is the potential bias introduced by simply discarding studies with incomplete information. Studies are presumably more likely to fail to adequately report an effect size when it is close to zero, than when it is relatively large (or at least statistically “significant”). Discarding these studies, then, is likely to result in an upwardly biased overall meta-analysis estimate, because the included effect estimates will be, in aggregate, higher than the excluded ones. A similar problem may occur when authors attempt to treat incomplete reporting as missing data, and to estimate values for the missing data using a standard imputation procedure. Most standard imputation procedures assume that missing data are missing at random (Enders, 2010 ), and this assumption does not hold for missing studies due to incomplete reporting in meta-analysis. The probability of a study effect estimate being missing due to incomplete reporting is directly related to the value of the missing number, with missing numbers likely to be smaller than observed ones. Thus, any imputed data are likely to be upwardly biased, and thereby bias the overall meta-analysis result.

The issue of inconsistent data reporting can most satisfactorily be addressed within the fields of primary research themselves. It is mentioned here as a commonly encountered and highly frustrating problem in meta-analysis in many fields. For this reason, systematic review teams are advised to engage a statistician to help with complex effect size calculations. Meta-analysis reports should record in detail which studies were excluded due to incomplete data and exactly what calculations were used to compute effect sizes within each study, and should include this information as an appendix to the meta-analysis report (e.g., as done in Mazerolle et al. 2013). Whatever alternative is taken up, the results may be validated by assessing the sensitivity of the overall meta-analysis to the method of dealing with missing and incomplete data.

Issue 4. Sources of heterogeneity

One of the primary research questions in social science meta-analysis is how the effects of a particular treatment or intervention differ across key variables of interest; for example, are the effects of school-based drug prevention programs different for schools in low-income and high-income areas, or for pupils of different ages or genders? Meta-analysis offers a unique opportunity to explore the answers to these questions by comparing treatment results across a range of studies. However, there can be difficulties in determining which effects are due to the treatment or intervention studied and which are due to study-level variables such as study design, measurement methods, or study sample. This has been identified as an issue in many disciplines, including social sciences (Lipsey, 2003), epidemiology (Boily et al. 2009), and ecology (Osenberg et al. 1999).

A serious issue may arise when sources of heterogeneity in effect sizes are difficult to isolate. For example, if most studies using a particular variation on an intervention also use a particular measure of the intervention effect, it may be difficult to separate the effect of the intervention variation from potential artefacts of the measurement method. In a standard regression, predictor variables that vary systematically are referred to as “confounded”. This terminology is adopted for the purposes of the following discussion; specifically, study-level characteristics that vary systematically to some degree are considered “confounded” (Lipsey, 2003 ).

Detecting confounded moderators may be straightforward in a meta-analysis of few studies. A table of key characteristics may be sufficient to reveal the degree to which study characteristics vary systematically (e.g. Mazerolle et al. 2013). In a larger sample of studies, however, this type of confounding may be more difficult to detect, and may result in inaccurate results of meta-regression (Viechtbauer, 2007) and other heterogeneity tests. Visualisation tools, such as meta-regression plots, may be useful in attempting to detect confounding. Comparing the results of meta-regression analyses with a single predictor to meta-regression with multiple predictors may also help to reveal the degree of confounding present. If study-level variables are good predictors of effect size when entered into meta-regression as a series of single predictors, but their predictive power diminishes when entered as multiple predictors in the same meta-regression, then the variables may be correlated and potentially confounded, as the sketch below illustrates.
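
A minimal sketch of that single- versus multiple-predictor comparison using metafor's rma(); the data frame `dat` and the moderators `design` and `measure` are hypothetical.

```r
library(metafor)

# yi and vi are effect sizes and sampling variances in a hypothetical
# data frame with one row per study.
single1 <- rma(yi, vi, mods = ~ design,           data = dat)  # one moderator
single2 <- rma(yi, vi, mods = ~ measure,          data = dat)  # the other alone
joint   <- rma(yi, vi, mods = ~ design + measure, data = dat)  # both together

# If design and measure each predict effect size alone but lose
# predictive power when entered jointly, they are likely confounded.
```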

A range of options have been posited for dealing with confounding once it has been detected. Basic options include meta-regression, subgroup analysis, and random-effects models, which are discussed in most meta-analysis texts (e.g. Borenstein et al. 2009 ; Koricheva et al. 2013 ; Sutton et al. 2000 ). More complex and statistically demanding options include network meta-analysis (Lumley 2002 ), Bayesian meta-analysis (Sutton & Abrams, 2001 ), and individual level data meta-analysis (Riley et al. 2010 ). The choice of which approach to use will depend on the amount and quality of data available, the degree and nature of confounding, the aims and scope of the research question, and the capabilities of the review team. Ultimately, some datasets may be deemed inappropriate for meta-analysis, but experts recommend attempting to correct for heterogeneity before abandoning meta-analysis altogether (Ioannidis et al. 2008 ). In particular, for research areas where the number of studies is limited by time (e.g., longitudinal research) or resources (e.g., large scale social interventions), this issue is likely to arise and may require more attention from specialists in meta-analysis methods.

Conclusions

This paper has summarised guidance from a wide range of sources on how to deal with four issues that may be encountered in systematic reviews and meta-analysis of applied social science research. A review of methods literature from statistics, ecology, psychology, epidemiology, and criminology has compiled a set of resources for the consideration of researchers who may encounter these issues in the process of undertaking their own systematic review and meta-analysis.

One way that reviewers can address these issues broadly is to ensure that the review team or advisory group includes members from multiple disciplines. A helpful advisory group, perhaps in addition to the core systematic review team, may include at least one methods expert and at least one expert on each substantive area that may have a bearing on the review question. The advisory group can be consulted at each stage of the research, in particular, in scoping the review question, and in any methodological decision making.

Experienced reviewers encountering one or more of these issues may be tempted to dismiss the entire research question as unanswerable with current methods. Indeed, in many situations this reaction may be perfectly appropriate. However, other experts may argue that in some situations, imperfect synthesis is better than none at all, particularly when a review is requested for the purposes of policy guidance. This paper is intended as a resource to direct applied researchers to possible resolutions for practical issues that may be encountered when attempting to use meta-analysis to address unanswered questions in their field. It is also intended for researchers attempting meta-analysis for the first time, who may attempt to address these issues with ad hoc resolutions if they are unaware of where to look for other methodological guidance. We therefore call for more attention to these issues from methodology experts, and more communication between applied researchers who have previously addressed these issues within their own discipline.

References

American Psychological Association: The Publication Manual of the American Psychological Association. American Psychological Association, Washington, DC; 2010.

Arksey H, O’Malley L: Scoping studies: Towards a methodological framework. Int J Soc Res Methodol 2005, 8: 19-32. doi:10.1080/1364557032000119616

Badyaev AV, Hill GE: Evolution of sexual dichromatism: Contribution of carotenoid- versus melanin-based coloration. Biol J Linnaean Soc 2000, 69: 153-172. doi:10.1006/bij1.1999.0350

Baunach PJ: Random assignment in criminal justice research: Some ethical and legal issues. Criminology 1980, 17: 435-444. doi:10.1111/j.1745-9125.1980.tb01307.x

Becker BJ: Multivariate meta-analysis. In Handbook of Applied Multivariate Statistics . Edited by: Tinsley HEA, Brown S. San Diego: Academic Press; 2000.

Becker BJ, Hedges LV, Pigott TD: Statistical Analysis Policy Brief Prepared for the Campbell Collaboration Steering Committee . The Campbell Collaboration; 2004. Retrieved from www.campbellcollaboration.org

Boily M-C, Baggaley RF, Wang L, Masse B, White RG, Hayes RJ, Alary M: Heterosexual risk of HIV-1 infection per sexual act: Systematic review and meta-analysis of observational studies. Lancet Infect Dis 2009, 9: 118-129. doi:10.1016/S1473-3099(09)70021-0

Borenstein M, Hedges LV, Higgins JPT, Rothstein H: Introduction to Meta-Analysis . Wiley, Chichester, U.K; 2009.

Des Jarlais DC, Lyles C, Crepaz N, the TREND Group: Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: The TREND statement. Am J Public Health 2004, 94: 361-366. doi:10.2105/AJPH.94.3.361

Dixon-Woods M, Cavers D, Agarwal S, Annandale E, Arthur A, Harvey J, Sutton AJ: Conducting a critical interpretive synthesis of the literature on access to healthcare by vulnerable groups. BMC Med Res Methodol 2006, 6: 35. doi:10.1186/1471-2288-6-35

Durlak JA, Weissberg RP, Dymnicki AB, Taylor RD, Schellinger KB: The impact of enhancing students’ social and emotional learning: A meta-analysis of school-based universal interventions. Child Dev 2011, 82: 405-432. doi:10.1111/j.1467-8624.2010.01564.x

Enders CK: Applied Missing Data Analysis . Guilford Press, New York; 2010.

Glass GV: Primary, secondary, and meta-analysis of research. Educ Res 1976, 5: 3-8.

Grann VR, Jacobson JS, Thomason D, Hershman D, Heitjan DF, Neugut AI: Effect of prevention strategies on survival and quality-adjusted survival of women with BRCA1/2 mutations: An updated decision analysis. J Clin Oncol 2002, 20: 2520-2529. doi:10.1200/JCO.2002.10.101

Hall PA: Neighborhood Watch and Participant Perceptions (Unpublished doctoral dissertation) . University of Southern California, California; 1987.

Harden A, Thomas J: Methodological issues in combining diverse study types in systematic reviews. Int J Soc Res Methodol 2005, 8: 257-271. doi:10.1080/13645570500155078

Hedges LV, Laine RD, Greenwald R: An exchange: Part I: Does money matter? A meta-analysis of studies of the effects of differential school inputs on student outcomes. Educ Res 1994, 23: 5-24.

Higgins JPT, Green S (Eds): Cochrane Handbook for Systematic Reviews of Interventions. The Cochrane Collaboration; 2011. Retrieved from www.cochrane-handbook.org

Higginson A, Mazerolle L, Davis J, Bedford L, Mengersen K: Protocol for a Systematic Review: Community-Oriented Policing’s Impact on interpersonal Violent Crime in Developing Countries. The Campbell Library of Systematic Reviews . 2013, 05-02.

Ioannidis J, Patsopoulos N, Rothstein H: Reasons or excuses for avoiding meta-analysis in forest plots. BMJ 2008, 336: 1413-1415. doi:10.1136/bmj.a117

Johansen HK, Gotzsche PC: Problems in the design and reporting of antifungal agents encountered during a meta-analysis. J Am Med Assoc 1999, 282: 1752-1759. doi:10.1001/jama.282.18.1752

Kilkenny C, Browne WJ, Cuthill IC, Emerson M, Altman DG: Improving bioscience research reporting: The ARRIVE guidelines for reporting animal research. PLoS Biol 2010, 8: e1000412. doi:10.1371/journal.pbio.1000412

Koricheva J, Gurevitch J, Mengersen K: Handbook of Meta-Analysis in Ecology and Evolution. Princeton University Press, Princeton; 2013.

Lipsey MW: Those confounded moderators in meta-analysis: Good, bad, and ugly. Ann Am Acad Pol Soc Sci 2003, 587: 69-81. doi:10.1177/0002716202250791

Lipsey MW, Wilson DB: Practical Meta-Analysis . Sage, Thousand Oaks, CA; 2001.

Lipsey MW, Petrie C, Weisburd D, Gottfredson D: Improving evaluation of anti-crime programs: Summary of a National Research Council report. J Exp Criminol 2006, 2: 271-307. doi:10.1007/s11292-006-9009-6

Littell JH: Lessons from a systematic review of effects of multisystemic therapy. Child Youth Serv Rev 2004, 27: 445-463. doi:10.1016/j.childyouth.2004.11.009

Littell JH, Corcoran J, Pillai V: Systematic Reviews and Meta-Analysis . Oxford University Press, Oxford, UK; 2008.

Lumley T: Network meta-analysis for indirect treatment comparisons. Stat Med 2002, 21: 2313-2324.

Mason J, Drummond M: Reporting guidelines for economic studies. Health Econ 1995, 4: 85-94. doi:10.1002/hec.4730040202

Mazerolle L, Bennett S, Davis J, Sargeant E, Manning M: Legitimacy in policing: A systematic review. Campbell Syst Rev 2013, 9. doi:10.4073/csr.2013.1

Murphy K, Hinds L, Fleming J: Encouraging public cooperation and support for police. Polic Soc 2008, 18: 136-155. doi:10.1080/10439460802008660

Osenberg CW, Sarnell O, Cooper SD, Holt RD: Resolving ecological questions through meta-analysis: Goals, metrics, and models. Ecology 1999, 80: 1105-1117. doi: 10.1890/0012-9658(1999)080[1105:REQTMA]2.0.CO;2

Ren L, Cao L, Lovrich N, Gaffney M: Linking confidence in the police with the performance of the police: Community policing can make a difference. J Crim Justice 2005, 33: 55-66. doi:10.1016/j.jcrimjus.2004.10.003

Riley RD: Multivariate meta-analysis: The effect of ignoring within-study correlation. J Royal Stat Soc: Series A (Statistics in Society) 2009, 172: 789-811. doi:10.1111/j.1467-985X.2008.00593.x

Riley RD, Lambert C, Abo-Zaid G: Meta-analysis of individual patient data: Rationale, conduct, and reporting. BMJ 2010, 340: 221. doi:10.1136/bmj.c221

Sackett DL, Strauss SE, Richardson WS, Rosenberg W, Haynes RB: Evidence-Based Medicine: How to Practice and Teach EBM . 2nd edition. Churchill Livingstone, Edinburgh; 2000.

Sampson RJ: Gold standard myths: Observations on the experimental turn in quantitative criminology. J Quant Criminol 2010, 26: 489-500. doi:10.1007/s10940-010-9117-3

Schulz KF, Altman DG, Moher D, the CONSORT Group: CONSORT 2010 statement: Updated guidelines for reporting parallel group randomised trials. Trials BMC Med 2010, 8: 18.

Schulze R: Current methods for meta-analysis: Approaches, issues, and developments. Zeitschrift fur Psychologie/J Psychol 2007, 215: 90-103.

Shadish WR, Myers D: Research Design Policy Brief for the Campbell Collaboration Steering Committee . The Campbell Collaboration; 2004. Retrieved from www.campbellcollaboration.org

Shadish WR, Cook TD, Campbell DT: Experimental and Quasi-Experimental Designs for Generalized Causal Inference . Houghton Mifflin, Boston, MA; 2002.

Sherman LW, Strang H, Barnes GC, Braithwaite J, Inkpen N, Teh M: Experiments in restorative policing: A progress report on the Canberra Reintegrative Shaming Experiments (RISE) . Australian Federal Police and Australian National University, Canberra, Australia; 1998.

Simera I, Moher D, Hoey J, Schulz KF, Altman DG: A catalogue of reporting guidelines for health research. Eur J Clin Investig 2010, 40: 35-53. doi:10.1111/j.1365-2362.2009.02234.x

Skogan WG, Steiner L: CAPS at Ten: An Evaluation of Chicago’s Alternative Policing Strategy . Illinois Criminal Justice Information Authority, Chicago, IL; 2004.

Slavin RE: Best-evidence synthesis: Why less is more. Educ Res 1987, 16: 15.

Stewart R, van Rooyen C, de Wet T: Purity or pragmatism? Reflecting on the use of systematic review methodology in development. J Dev Effectiveness 2012, 4: 430-444. doi:10.1080/19439342.2012.711341

Stone AA, Shiffman S: Capturing momentary, self-report data: A proposal for reporting guidelines. Ann Behav Med 2002, 24: 236-243. doi:10.1207/S15324796ABM2403_09

Sutton AJ, Abrams KR: Bayesian methods in meta-analysis and evidence synthesis. Stat Methods Med Res 2001, 10: 277-303. doi:10.1191/096228001678227794

Sutton AJ, Abrams KR, Jones DR, Sheldon TA, Song F: Methods for Meta-Analysis in Medical Research . Wiley, New York; 2000.

Taylor CF, Field D, Sansone S-A, Aerts J, Apweiler R, Ashburner M, Wiemann S: Promoting coherent minimum reporting guidelines for biological and biomedical investigations: The MIBBI project. Nat Biotechnol 2008, 26: 889-896. doi:10.1038/nbt.1411

Thomas JT, Harden A: Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Med Res Methodol 2008, 8: 45. doi:10.1186/1471-2288-8-45

Viechtbauer W: Accounting for heterogeneity via random-effects models and moderator analyses in meta-analysis. Zeitschrift fur Psychologie/J Psychol 2007, 215: 104-121.

Von Elm E, Altman DG, Egger M, Pocock SJ, Gotzsche PC, Vandenbroucke JP: The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: Guidelines for reporting observational studies. Bull World Health Organisation 2007, 85: 867-872.

Waddington H, White H, Snilstveit B, Hombrados JG, Vojtkova M, Davies P, Bhavsar A, Eyers J, Koehlmoos TP, Petticrew M, Valentine JC, Tugwell P: How to do a good systematic review of effects in international development: A tool kit. J Dev Effect 2012, 4: 359-387. doi:10.1080/19439342.2012.711765

Download references

Acknowledgements

This research was supported by a grant from the United Kingdom National Policing Improvement Agency and George Mason University.

Author information

Authors and affiliations

Institute for Social Science Research, University of Queensland, Brisbane, St Lucia, 4072, Australia

Jacqueline Davis, Sarah Bennett & Lorraine Mazerolle

School of Mathematical Sciences, Queensland University of Technology, GPO Box 2434, Brisbane, 4001, Australia

Kerrie Mengersen


Corresponding author

Correspondence to Kerrie Mengersen .

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

JD, SB, LM conducted the case study. KM provided statistical advice. JD led the quantitative analyses and the write-up of the manuscript. All authors reviewed and approved the paper.

Rights and permissions

Open Access  This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Davis, J., Mengersen, K., Bennett, S. et al. Viewing systematic reviews and meta-analysis in social research through different lenses. SpringerPlus 3 , 511 (2014). https://doi.org/10.1186/2193-1801-3-511


Received : 18 March 2014

Accepted : 04 September 2014

Published : 10 September 2014

DOI : https://doi.org/10.1186/2193-1801-3-511


Keywords

  • Meta-analysis
  • Heterogeneity
  • Systematic review
  • Missing data



Research Article

Development of insomnia in patients with stroke: A systematic review and meta-analysis

Junwei Yang, Aitao Lin, Qingjing Tan, Weihua Dou, Jinyu Wu, Yang Zhang, Haohai Lin, Baoping Wei, Jiemin Huang, Juanjuan Xie

Junwei Yang and Juanjuan Xie contributed equally to this work. JY, AL, and QT share first authorship.

Affiliations: The First Affiliated Hospital of Guangxi University of Chinese Medicine, Nanning, Guangxi, 530023, China; Guangxi University of Traditional Chinese Medicine, Nanning, Guangxi, 530001, China

E-mail: [email protected] (JY); [email protected] (JX)

  • Published: April 10, 2024
  • https://doi.org/10.1371/journal.pone.0297941

Background and aim

Stroke is a serious threat to human life and health, and post-stroke insomnia is a common complication that severely impairs patients’ quality of life and delays recovery. An early understanding of the relationship between stroke and post-stroke insomnia can provide clinical evidence for preventing and treating post-stroke insomnia. This study aimed to investigate the prevalence of insomnia in patients with stroke.

Methods

The Web of Science, PubMed, Embase, and Cochrane Library databases were searched for eligible studies published up to June 2023. Quality assessment was performed, and valid data were extracted for the meta-analysis.

Prevalence rates were pooled using a random-effects model. I 2 statistics were used to assess the heterogeneity of the studies.

Results

  • Twenty-six studies met the inclusion criteria for the meta-analysis, with 1,193,659 participants, of whom 497,124 were patients with stroke.
  • The meta-analysis indicated that 150,181 patients with stroke developed insomnia during follow-up [46.98%, 95% confidence interval (CI): 36.91–57.18] and 1806 patients with ischemic stroke (IS) or transient ischemic attack (TIA) developed insomnia (47.21%, 95% CI: 34.26–60.36). Notably, 41.51% of patients with unclassified stroke developed insomnia (95% CI: 28.86–54.75). The incidence of insomnia was significantly higher in patients with acute stroke than in patients with nonacute stroke (59.16% vs 44.07%, P < 0.0001).
  • Similarly, the incidence of insomnia was significantly higher in patients with stroke at a mean age of ≥65 years than in patients at a mean age of <65 years (47.18% vs 40.50%, P < 0.05). Fifteen studies reported the follow-up time; the incidence of insomnia was significantly higher with follow-up of ≥3 years than with follow-up of <3 years (58.06% vs 43.83%, P < 0.05). Twenty-one studies used an insomnia assessment diagnostic tool, with which the rate of insomnia in patients with stroke was 49.31% (95% CI: 38.59–60.06). Five studies used self-report; among these, the rate of insomnia in patients with stroke was 37.58% (95% CI: 13.44–65.63).

Conclusions

Stroke may be a predisposing factor for insomnia. Insomnia is more likely to occur in acute-phase stroke, and the prevalence of insomnia increases with patient age and follow-up time. Further, reported rates of insomnia are higher in studies that assessed patients with a dedicated insomnia diagnostic tool rather than self-report.

Citation: Yang J, Lin A, Tan Q, Dou W, Wu J, Zhang Y, et al. (2024) Development of insomnia in patients with stroke: A systematic review and meta-analysis. PLoS ONE 19(4): e0297941. https://doi.org/10.1371/journal.pone.0297941

Editor: Tanja Grubić Kezele, University of Rijeka Faculty of Medicine: Sveuciliste u Rijeci Medicinski fakultet, CROATIA

Received: September 6, 2023; Accepted: January 14, 2024; Published: April 10, 2024

Copyright: © 2024 Yang et al. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: All relevant data are within the manuscript and its Supporting Information files.

Funding: This work was supported by the Administration of Traditional Chinese Medicine in Guangxi, self-financing scientific research project [grant number GXZYA20220072]; the Natural Science Foundation of Guangxi [grant number 2023GXNSFAA026200]; a hospital scientific research project of the First Affiliated Hospital of Guangxi University of Traditional Chinese Medicine [grant number 2021QN008]; and a Guangxi University of Traditional Chinese Medicine research project [grant number 2022QN019]. This work was supported by Junwei Yang and Qingjing Tan.

Competing interests: The authors have declared that no competing interests exist.

1 Introduction

Stroke is the second leading cause of death and disability globally and is characterized by high morbidity, disability, mortality, and recurrence. It substantially threatens human life, health, and quality of life [ 1 , 2 ]. Previous studies have revealed that neuropsychiatric disorders such as insomnia, depression, and anxiety frequently affect stroke survivors [ 3 ]. One third of stroke patients meet the diagnostic criteria for insomnia and may experience difficulty falling asleep, difficulty maintaining sleep, and early awakening [ 4 ].

Insomnia is the most common sleep disorder and is prevalent in people of all ages. In severe cases, it can affect daytime work and life and even cause emotional disorders [ 5 ]. The incidence of insomnia increases with rising social pressure [ 6 ]. Studies have shown that the incidence of insomnia in stroke patients is higher than in the healthy population, and patients with insomnia may be at greater risk of stroke [ 7 ]. A growing number of studies suggest that insomnia has a bidirectional relationship with stroke: insomnia may be an independent risk factor for stroke, and stroke may in turn be a predisposing factor for insomnia [ 8 ]. Therefore, it is essential to understand the relationship between stroke and post-stroke insomnia at an early stage to provide a clinical basis for the early prevention and treatment of post-stroke insomnia. This study aimed to investigate the prevalence of insomnia in patients with stroke.

2 Research design and method

The study was designed and conducted in strict accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [ 9 , 10 ].

2.1 Data source and selection process

Literature related to the development of insomnia in stroke patients was collected through the PubMed, Cochrane Library, Web of Science, and Embase databases up to June 2023.

2.2 Search strategy

We searched the related literature using subject terms such as “Stroke”, “Cerebrovascular Accident”, “Insomnia”, and “Insomnia Disorder”. The full search strategy for the PubMed database is shown in Fig 1.

Fig 1. Search strategy for the PubMed database. https://doi.org/10.1371/journal.pone.0297941.g001

2.3 Eligibility criteria and study selection

In this study, we included English-language cohort and cross-sectional studies of stroke patients who developed insomnia. Stroke patients met the diagnostic criteria of the Essentials of Diagnosis of Various Cerebrovascular Diseases [ 11 ]. Insomnia was diagnosed through recognized assessment tools such as the Pittsburgh Sleep Quality Index (PSQI) and Hamilton Depression Scale (HDS), or through self-reported symptoms of insomnia meeting the 2014 diagnostic criteria of the American Academy of Sleep Medicine [ 12 ].

2.4 Exclusion criteria

We excluded duplicate records, case reports, reviews, and literature with incomplete data indicators or unavailable information.

2.5 Data extraction

2.5.1 Literature screening and information extraction.

LAT and TQJ screened the included studies. The extracted information included the basic details of each study: first author, year of publication, sample size, country, follow-up time, and number of positive cases. In case of disagreement between the two researchers during screening or data extraction, the decision was submitted to a third researcher (YJW).

2.5.2 Literature quality assessment.

The methodological quality of the included studies was assessed using the Critical Appraisal Tool for Prevalence Studies [ 13 , 14 ]. Any disagreements between the researchers were resolved by a third researcher (YJW).

2.6 Statistical analysis

In this study, we used Comprehensive Meta-Analysis software version 3 for the statistical analyses [ 15 ]. A fixed-effects model was used when P ≥ 0.10 and I 2 ≤ 50%; a random-effects model was used when P < 0.10 and/or I 2 > 50%, in which case it was necessary to find the source of heterogeneity and perform subgroup or sensitivity analyses [ 16 – 18 ].
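For readers who want to see this decision rule in action, the following is a minimal sketch with hypothetical effect estimates and variances; the paper's own computations were done in its meta-analysis software, and none of this code is the authors':

    # Sketch: compute Cochran's Q and I2 from hypothetical study-level
    # estimates and variances, then apply the model-selection rule above
    # (fixed-effects if P >= 0.10 and I2 <= 50%, otherwise random-effects).
    import numpy as np
    from scipy import stats

    effects = np.array([0.10, -0.30, 0.40, 0.20])    # hypothetical estimates
    variances = np.array([0.02, 0.05, 0.03, 0.04])   # their sampling variances

    w = 1.0 / variances                              # inverse-variance weights
    pooled = np.sum(w * effects) / np.sum(w)         # fixed-effect pooled estimate
    q = np.sum(w * (effects - pooled) ** 2)          # Cochran's Q statistic
    df = len(effects) - 1
    p = stats.chi2.sf(q, df)                         # P-value of the heterogeneity test
    i2 = max(0.0, (q - df) / q) * 100.0              # I2 as a percentage

    model = "fixed" if (p >= 0.10 and i2 <= 50.0) else "random"
    print(f"Q = {q:.2f}, P = {p:.3f}, I2 = {i2:.1f}% -> {model}-effects model")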

3 Results

3.1 PROSPERO registration

Registration number: CRD42023452419.

3.2 Literature search results

We retrieved 1507 records from the databases, of which 469 were duplicates and were excluded. We then excluded 927 studies based on the exclusion criteria. Overall, 111 studies were retained for full-text evaluation, and 26 studies were finally included in the meta-analysis ( Fig 2 ) [ 7 , 19 – 43 ].

Fig 2. Flow diagram of study identification and selection. https://doi.org/10.1371/journal.pone.0297941.g002

3.4 Basic characteristics of the included studies

The 26 included studies (13 prospective cohort studies, 10 cross-sectional studies, 2 retrospective studies, and 1 multicenter observational study) were published between 2002 and 2023. Overall, they included 1,193,659 participants, of whom 497,124 were patients with stroke. The details are shown in Table 1 .

Table 1. Basic characteristics of the included studies. https://doi.org/10.1371/journal.pone.0297941.t001

3.5 Quality of included studies

Table 2 shows the quality assessment of the included studies. Eighteen studies (69.23%) determined the prevalence of insomnia in stroke patients with a sufficient sample size. Most studies (80.77%) used standardized instruments or validated diagnostic criteria to assess insomnia. The details are shown in Table 2 .

Table 2. Quality assessment of the included studies. https://doi.org/10.1371/journal.pone.0297941.t002

3.6 Meta-analysis

We used a random-effects model to pool the prevalence of insomnia in patients with stroke. In total, 150,181 patients with stroke developed insomnia during follow-up, and the pooled prevalence was 46.98% (95% CI: 36.91–57.18) ( Fig 3 ).

Fig 3. Pooled prevalence of insomnia in patients with stroke. https://doi.org/10.1371/journal.pone.0297941.g003
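To illustrate the mechanics behind a pooled prevalence of this kind, here is a minimal sketch with made-up event counts rather than the paper's data (the authors used Comprehensive Meta-Analysis software, not this code), showing random-effects pooling of proportions via the logit transform and the DerSimonian–Laird between-study variance:

    # Sketch: random-effects pooling of study prevalences (hypothetical data).
    import numpy as np

    events = np.array([55, 120, 40, 210])    # insomnia cases per study (made up)
    n = np.array([100, 300, 80, 400])        # stroke patients per study (made up)

    p_hat = events / n
    y = np.log(p_hat / (1 - p_hat))          # logit prevalence per study
    v = 1 / events + 1 / (n - events)        # variance of the logit prevalence
    w = 1 / v                                # fixed-effect (inverse-variance) weights

    pooled_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - pooled_fe) ** 2)     # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)  # DerSimonian-Laird tau-squared

    w_re = 1 / (v + tau2)                    # random-effects weights
    mu = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))

    inv_logit = lambda x: 100 / (1 + np.exp(-x))   # back-transform to percent
    print(f"pooled prevalence {inv_logit(mu):.1f}% "
          f"(95% CI {inv_logit(mu - 1.96 * se):.1f}-{inv_logit(mu + 1.96 * se):.1f})")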

Moreover, nine studies examined the occurrence of insomnia in patients with IS or TIA. The results showed that the prevalence of insomnia among patients with IS or TIA was 47.21% (95% CI: 34.26–60.36) ( Fig 4 ).

Fig 4. Prevalence of insomnia among patients with IS or TIA. https://doi.org/10.1371/journal.pone.0297941.g004

Four studies explicitly examined the prevalence of insomnia among patients with IS or hemorrhagic stroke, and the prevalence was 44.09% (95% CI: 19.84–69.92), while twelve studies did not specify the type of stroke ( Fig 5 ).

Fig 5. https://doi.org/10.1371/journal.pone.0297941.g005

Five studies examined insomnia in patients with acute stroke, and the prevalence was 59.16% (95% CI: 24.18–89.55) ( Fig 6 ). Meanwhile, across twenty-one studies, the prevalence of insomnia in patients with nonacute stroke was 44.07% (95% CI: 34.74–53.61) ( Fig 7 ).

Fig 6. Prevalence of insomnia in patients with acute stroke. https://doi.org/10.1371/journal.pone.0297941.g006

Fig 7. Prevalence of insomnia in patients with nonacute stroke. https://doi.org/10.1371/journal.pone.0297941.g007

In the subgroup analysis, we found that the incidence of insomnia was significantly higher in patients with stroke at a mean age of ≥65 years than in patients at a mean age of <65 years [47.18% (95% CI: 26.7–68.16) vs 40.50% (95% CI: 26.21–55.66), P <0.05] (Figs 8 and 9 ).

Fig 8. Prevalence of insomnia in patients with stroke at a mean age of ≥65 years. https://doi.org/10.1371/journal.pone.0297941.g008

Fig 9. Prevalence of insomnia in patients with stroke at a mean age of <65 years. https://doi.org/10.1371/journal.pone.0297941.g009

Moreover, concerning follow-up duration, we found that the prevalence of insomnia was significantly higher among participants followed for ≥3 years than among those followed for <3 years (58.06% vs 43.83%, P < 0.001) (Figs 10 and 11 ).

Fig 10. Prevalence of insomnia with follow-up of ≥3 years. https://doi.org/10.1371/journal.pone.0297941.g010

Fig 11. Prevalence of insomnia with follow-up of <3 years. https://doi.org/10.1371/journal.pone.0297941.g011

Finally, a subgroup analysis was performed based on the insomnia assessment method (clinical assessment diagnostic tools vs self-report). Twenty-one studies used insomnia assessment diagnostic tools, and the insomnia rate in stroke patients was 49.31% (95% CI: 38.59–60.06) ( Fig 12 ). Five studies used self-report, and the insomnia rate in stroke patients was 37.58% (95% CI: 13.44–65.63) ( Fig 13 ).

Fig 12. Prevalence of insomnia in studies using insomnia assessment diagnostic tools. https://doi.org/10.1371/journal.pone.0297941.g012

Fig 13. Prevalence of insomnia in studies using self-report. https://doi.org/10.1371/journal.pone.0297941.g013

4 Discussion

4.1 Key findings

This study is an updated review of the prevalence of insomnia among patients with stroke. It included 26 studies from 11 countries, of which 15 (57.69%) were conducted in Asia and the remainder outside Asia. Of the 26 included studies, 21 used diagnostic tools and 5 used nondiagnostic tools for assessing insomnia.

Overall, our meta-analysis indicated that the rate of insomnia after stroke was 48.37%. The estimated incidence for IS or TIA (47.21%) was higher than that for unclassified stroke (41.51%); the rate for acute-phase stroke (59.16%) was higher than that for nonacute-phase stroke (36.31%); the proportion among patients with a mean age ≥65 years (47.18%) was higher than among those with a mean age <65 years (44.43%); the rate with follow-up ≥3 years (58.06%) was higher than with follow-up <3 years (43.83%); and the rate when a diagnostic tool was used for insomnia assessment (51.16%) was higher than when a nondiagnostic tool was used (37.58%). This suggests that post-stroke insomnia is a substantial global public health problem that needs urgent attention for prevention and treatment.

4.2 Comparisons of the study findings with the available evidence

Our study found that the rate of insomnia after stroke (48.37%) was 1.27 times the prevalence reported in the meta-analysis by Baylan et al. in 2019 (38.2%) [ 44 ], indicating that the prevalence of post-stroke insomnia has continued to increase yearly and that insomnia has a significant negative impact on patients. The data in this study indicated that sleep-related apnea was significantly associated with stroke, and obstructive sleep apnea syndrome might double the risk of stroke [ 1 ]. A 4-year follow-up study in Taiwan, China revealed that the incidence of stroke was significantly higher in patients with insomnia than in patients without insomnia [ 45 ]. A similar meta-analysis showed that sleep duration was also associated with the risk of stroke, with a 5%–7% increase in stroke risk for every 1-h decrease in short sleep duration (RR = 1.05–1.07, 95% CI: 1.01–1.12) [ 46 , 47 ].

Insomnia after stroke is associated with the phase (acute or chronic) of the stroke. In this study, we found that the rate of insomnia was higher in the acute phase of stroke (59.16%) than in the nonacute phase (36.31%). Using polysomnography, Luisa et al. found that poorer sleep quality in acute IS patients was reflected in reduced sleep efficiency and greater wake time after sleep onset [ 48 ]. Insomnia in patients with stroke usually has several causes, and insomnia in patients with acute stroke has been associated with an increased risk of post-stroke psychiatric disorders [ 49 ].

Moreover, the age of patients with stroke and the duration of follow-up are also important factors influencing the rate of insomnia in patients with stroke. In the general population, insomnia may increase with age [ 50 ], and studies have shown a significantly higher prevalence of insomnia in elderly people [ 51 ]. Nick Glozier et al. found that the prevalence of insomnia was 16% at 6 months after stroke and 23% at 12 months after stroke [ 38 ]. These studies suggest that older patients with stroke may have an increased likelihood of experiencing insomnia during follow-up, and this likelihood seems to grow over time.

The insomnia assessment and diagnostic tool is also one of the factors affecting the reported rate of insomnia. This study found that the prevalence of insomnia assessed with a diagnostic tool was 51.16%, higher than the prevalence of self-reported insomnia (37.58%). In contrast, in a study using an insomnia assessment and diagnostic tool, the prevalence of insomnia differed between acute-phase and subacute-phase stroke (32.5% vs 34.8%), and the overall prevalence of self-assessed insomnia also differed between acute-phase and subacute-phase stroke (47.1% vs 50.4%) [ 52 ]. Further large-sample studies are needed to validate these findings.

This study had some limitations. First, study quality was not an exclusion criterion, which might have contributed to the differences in the reported prevalence of insomnia after stroke; in addition, the included studies used different tools for assessing and diagnosing insomnia, which might also have biased the conclusions. Second, we did not examine the treatment of patients with stroke and its effect on the development of insomnia.

5 Conclusions

Stroke may be a predisposing factor for insomnia. Insomnia is more likely to occur in acute-phase stroke, and the prevalence of insomnia increases with patient age and follow-up duration. Further, reported rates of insomnia are higher in studies that assessed patients with a dedicated insomnia diagnostic tool.

Supporting information

S1 Checklist. PRISMA 2020 checklist.

https://doi.org/10.1371/journal.pone.0297941.s001

https://doi.org/10.1371/journal.pone.0297941.s002

https://doi.org/10.1371/journal.pone.0297941.s003

  • 5. American Academy of Sleep Medicine. International Classification of Sleep Disorders. 3rd ed. Darien, IL: American Academy of Sleep Medicine; 2014.
  • 12. American Academy of Sleep Medicine. International Classification of Sleep Disorders. 3rd ed. Darien, IL: American Academy of Sleep Medicine; 2014.
  • 16. Borenstein M, Hedges L, Higgins J, Rothstein H. Comprehensive Meta-Analysis version 2. Englewood: Biostat; 2005. p. 104.

Approaches for boosting self-confidence of clinical nursing students: A systematic review and meta-analysis

  • Ramezanzade Tabriz, Elahe
  • Sadeghi, Masoumeh
  • Tavana, Ensieh
  • Heidarian Miri, Hamid
  • Heshmati Nabavi, Fatemeh

Background. Self-confidence is a key element in successfully promoting achievement strivings among the healthcare workforce. Targeted interventions can strengthen this characteristic in nursing students, thus improving the quality of hospital services.

Objectives. We evaluated the effect of educational interventions on boosting self-confidence in nursing students using systematic review and meta-analysis.

Methods. A comprehensive search was used to screen related studies in Scopus, PubMed, Embase, Web of Science, and PsycINFO. Peer-reviewed literature in English up to June 2023 was reviewed. Inclusion criteria were controlled trials, either randomized (RCTs) or non-randomized studies of intervention (NRSI). Methodological quality was assessed with the Cochrane Risk of Bias in Non-randomized Studies of Interventions tool (ROBINS-I), the Cochrane "Risk of Bias" tool for RCTs (RoB 2.0), and a quality assessment tool for before-after (pre-post) studies with no control group. The main outcome was the self-confidence score of nursing students attributable to educational methods or interventions. Using the inverse-variance weights method, a pooled standardized mean difference (SMD) with a corresponding 95% confidence interval (CI) was determined. Random-effects meta-analysis in Stata was used to assess heterogeneity.

Results. Twenty-two studies were selected, involving 1758 participants and 940 nursing students in the intervention groups (14 randomized controlled trials, 5 quasi-experimental studies, and 3 before-after studies). Post-intervention self-confidence was significantly greater in the intervention groups (SMD for controlled experimental designs = 0.51; 95% CI = 0.14-0.89; SMD for quasi-experimental designs = 0.04; 95% CI = -0.33-0.41; SMD for before-after (pre-post) designs = 2.74; 95% CI = 1.85-3.63). The random-effects meta-analysis of the 22 interventional studies determined that educational interventions are significantly associated with improved self-confidence in nursing students, and Cohen's d indicated a moderate impact. The results for simulation learning interventions (SMD = 0.42; 95% CI = 0.03-0.81) also showed a significant relationship between intervention and outcome.

Conclusions. Analysis of our findings revealed the successful impact of most interventional approaches in boosting self-confidence, especially in the long term. Self-confidence is a multifactorial concept that can be improved by using targeted combination intervention strategies.

  • Self-confidence;
  • Approaches;
  • Meta-analysis

  • Advanced Search
  • Journal List
  • Int J Sports Phys Ther
  • v.7(5); 2012 Oct

SYSTEMATIC REVIEW AND META‐ANALYSIS: A PRIMER

Franco M. Impellizzeri

1 Department of Research and Development and FIFA Medical Assessment and Research Centre, Schulthess Clinic, Zurich, Switzerland

Mario Bizzini

We would like to thank Kirsten Clift for the English revision of the manuscript.

The use of an evidence‐based approach to practice requires “the integration of best research evidence with clinical expertise and patient values”, where the best evidence can be gathered from randomized controlled trials (RCTs), systematic reviews and meta‐analyses. Furthermore, informed decisions in healthcare and the prompt incorporation of new research findings in routine practice necessitate regular reading, evaluation, and integration of the current knowledge from the primary literature on a given topic. However, given the dramatic increase in published studies, such an approach may become too time consuming and therefore impractical, if not impossible. Therefore, systematic reviews and meta‐analyses can provide the “best evidence” and an unbiased overview of the body of knowledge on a specific topic. In the present article the authors aim to provide a gentle introduction to readers not familiar with systematic reviews and meta‐analyses in order to understand the basic principles and methods behind this type of literature. This article will help practitioners to critically read and interpret systematic reviews and meta‐analyses to appropriately apply the available evidence to their clinical practice.

INTRODUCTION

Sackett et al 1 , 2 defined evidence‐based practice as “the integration of best research evidence with clinical expertise and patient values”. The “best evidence” can be gathered by reading randomized controlled trials (RCTs), systematic reviews, and meta‐analyses. 2 It should be noted that the “best evidence” (e.g. concerning clinical prognosis, or patient experience) may also come from other types of research designs, particularly when dealing with topics that are not possible to investigate with RCTs. 3 , 4 From the available evidence, it is possible to provide clinical recommendations using different levels of evidence. 5 Although sometimes a matter of debate, 6 ‐ 8 when properly applied, the evidence‐based approach and therefore meta‐analyses and systematic reviews (the highest level of evidence) can help the decision‐making process in different ways: 9

  • 1. Identifying treatments that are not effective;
  • 2. Summarizing the likely magnitude of benefits of effective treatments;
  • 3. Identifying unanticipated risks of apparently effective treatments;
  • 4. Identifying gaps of knowledge;
  • 5. Auditing the quality of existing randomized controlled trials.

The number of scientific articles published in biomedical areas has dramatically increased in the last several decades. Given the quest for timely and informed decisions in healthcare and medicine, good clinical practice, and the prompt integration of new research findings into routine practice, clinicians and practitioners should regularly read new literature and compare it with the existing evidence. 10 However, this is time consuming, and it is therefore impractical if not impossible for practitioners to continuously read, evaluate, and incorporate the current knowledge from the primary literature on a given topic. 11 Furthermore, the reader also needs to be able to interpret both the new and the past body of knowledge in relation to the methodological quality of the studies. This makes it even more difficult to use the scientific literature as reference knowledge for clinical decision‐making. For this reason, review articles are important tools for practitioners to summarize and synthesize the available evidence on a particular topic, 10 in addition to being an integral part of the evidence‐based approach.

International institutions have been created in recent years in an attempt to standardize and update scientific knowledge. Probably the best-known example is the Cochrane Collaboration, founded in 1993 as an independent, non‐profit organisation, now comprising more than 28,000 contributors worldwide and producing systematic reviews and meta‐analyses of healthcare interventions. There are currently over 5000 Cochrane Reviews available ( http://www.cochrane.org ). The methodology used to perform systematic reviews and meta‐analyses is crucial. Furthermore, systematic reviews and meta‐analyses have limitations that should be acknowledged and considered. Like any other scientific research, a systematic review with or without meta‐analysis can be performed well or badly. As a consequence, guidelines have been developed and proposed to reduce the risk of drawing misleading conclusions from poorly conducted literature searches and meta‐analyses. 11 ‐ 18

In the present article the authors aim to provide an introduction for readers not familiar with systematic reviews and meta‐analysis, to help them understand the basic principles and methods behind this kind of literature. A meta‐analysis is not just a statistical tool: it qualifies as an actual observational study, and hence it must be approached following established research methods involving well‐defined steps. This review should also help practitioners to critically and appropriately read and interpret systematic reviews and meta‐analyses.

NARRATIVE VERSUS SYSTEMATIC REVIEWS

Literature reviews can be classified as “narrative” or “systematic” ( Table 1 ). Narrative reviews were the first form of literature overview, allowing practitioners a quick synopsis of the current state of science on a topic of interest. When written by experts (usually by invitation), narrative reviews are also called “expert reviews”. However, both narrative and expert reviews are based on a subjective selection of publications, through which the reviewer qualitatively addresses a question by summarizing the findings of previous studies and drawing a conclusion. 15 As such, albeit offering interesting information for clinicians, they carry an obvious author's bias, since they are not performed following a clear methodology (i.e. the identification of the literature is not transparent). Indeed, narrative and expert reviews typically use literature to support the authors' statements, but it is not clear whether these statements are evidence‐based or just a personal opinion/experience of the authors. Furthermore, the lack of a specific search strategy increases the risk of failing to identify relevant or key studies on a given topic, thus allowing questions to arise regarding the conclusions made by the authors. 19 Narrative reviews should be considered opinion pieces or invited commentaries; as sources of evidence they are unreliable and have a low evidence level. 10 , 11 , 19

Table 1. Characteristics of narrative and systematic reviews, modified from the Physiotherapy Evidence Database. 37

By conducting a “systematic review”, the flaws of narrative reviews can be limited or overcome. The term “systematic” refers to the strict approach (a clear set of rules) used to identify relevant studies, 11 , 15 which includes an accurate search strategy designed to identify all studies addressing a specific topic, the establishment of clear inclusion/exclusion criteria, and a well‐defined methodological analysis of the selected studies. A properly performed systematic review reduces the potential bias in identifying studies, thus limiting the authors' ability to select only the studies arbitrarily considered most “relevant” for supporting their own opinion or research hypotheses. Systematic reviews are considered to provide the highest level of evidence.

META‐ANALYSIS

A systematic review can be concluded in a qualitative way, by discussing, comparing, and tabulating the results of the various studies, or by statistically analysing the results from the independent studies: that is, by conducting a meta‐analysis. Meta‐analysis has been defined by Glass 20 as “the statistical analysis of a large collection of analysis results from individual studies for the purpose of integrating the findings”. By combining individual studies it is possible to provide a single and more precise estimate of the treatment effects. 11 , 21 However, the quantitative synthesis of results from a series of studies is meaningful only if these studies have been identified and collected in a proper and systematic way; this is why the systematic review always precedes the meta‐analysis and the two methodologies are commonly used together. Ideally, combining individual study results into a single summary estimate is appropriate when the selected studies are targeted to a common goal, have similar clinical populations, and share the same study design. When the studies are thought to be too different (statistically or clinically), some researchers prefer not to calculate summary estimates. Reasons for not presenting summary estimates are usually related to aspects of study heterogeneity such as clinical diversity (e.g. different metrics or outcomes, participant characteristics, settings, etc.), methodological diversity (different study designs), and statistical heterogeneity. 22 Some methods, however, are available for dealing with these problems in order to combine the study results. 22 Nevertheless, the source of heterogeneity should always be explored using, for example, sensitivity analyses, in which the primary studies are classified into different groups based on methodological and/or clinical characteristics and subsequently compared. Even after this subgroup analysis, the studies included in the groups may still be statistically heterogeneous, and therefore the calculation of a single estimate may be questionable. 11 , 19 Statistical heterogeneity can be calculated with different tests, the most popular being Cochran's Q and the I 2 statistic. 23 Although the latter is thought to be more powerful, it has been shown that their performance is similar, 24 and these tests are generally weak (low power). Therefore, their confidence intervals should always be presented in meta‐analyses and taken into consideration when interpreting heterogeneity. Although heterogeneity can be seen as a “statistical” problem, it is also an opportunity for obtaining important clinical information about the influence of specific clinical differences. 11 Sometimes the goal of a meta‐analysis is precisely to explore the source of diversity among studies; 15 in this situation the inclusion criteria are purposely allowed to be broader.
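For reference, the two statistics just mentioned have the following standard definitions (textbook formulas, not reproduced from this article), where $\hat{\theta}_i$ is the effect estimate of study $i$, $w_i = 1/v_i$ its inverse-variance weight, $\hat{\theta}$ the fixed-effect pooled estimate, and $k$ the number of studies:

    Q = \sum_{i=1}^{k} w_i \left( \hat{\theta}_i - \hat{\theta} \right)^2,
    \qquad
    I^2 = \max\!\left( 0,\ \frac{Q - (k - 1)}{Q} \right) \times 100\%

Under the null hypothesis of homogeneity, Q approximately follows a chi-squared distribution with k − 1 degrees of freedom, which is what the heterogeneity test evaluates.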

Meta‐analyses of observational studies

Although meta‐analyses usually combine results from RCTs, meta‐analyses of epidemiological studies (case‐control, cross‐sectional, or cohort studies) are increasing in the literature, and guidelines for conducting this type of meta‐analysis have therefore been proposed (e.g. Meta‐analysis Of Observational Studies in Epidemiology, MOOSE 25 ). Although the study design with the highest level of evidence is the RCT, observational studies are used in situations where RCTs are not possible, such as when investigating the potential causes of a rare disease, the prevalence of a condition, or other etiological hypotheses. 3 , 4 , 11 The two designs, however, usually address different research questions (e.g. efficacy versus effectiveness), and therefore including both RCTs and observational studies in the same meta‐analysis would not be appropriate. 11 , 15 Major problems of observational studies are the lack of a control group, the difficulty of controlling for confounding variables, and the high risk of bias. 26 Nevertheless, observational studies, and therefore meta‐analyses of observational studies, can be useful and are an important step in examining the effectiveness of treatments in healthcare. 3 , 4 , 11 For meta‐analyses of observational studies, sensitivity analysis exploring the source of heterogeneity is often the main aim. Of note, meta‐analyses themselves can be considered “observational studies of the evidence” 11 and, as a consequence, they may be influenced by known and unknown confounders, similarly to primary observational studies.

Meta‐analyses based on individual patient data

While “traditional” meta‐analyses combine aggregate data (averages over the study participants, such as mean treatment effects, mean age, etc.) to calculate a summary estimate, it is possible (if the data are available) to perform meta‐analyses using the individual participant data from which the aggregate data are derived. 27 ‐ 29 Meta‐analyses based on individual participant data are increasing. 28 This kind of meta‐analysis is considered the most comprehensive and has been regarded as the gold standard for systematic reviews. 29 , 30 Of course, it is not possible to simply pool the participants of various studies as if they came from one large, single trial. The analysis must be stratified by study so that the clustering of patients within studies is retained, preserving the effects of the randomization used in the primary investigations and avoiding artifacts such as Simpson's paradox, which is a change in the direction of the associations. 11 , 15 , 28 , 29 There are several potential advantages of this kind of meta‐analysis, such as consistent data checking, consistent use of inclusion and exclusion criteria, better methods for dealing with missing data, the possibility of performing the same statistical analyses across studies, and a better examination of the effects of participant‐level covariates. 15 , 31 , 32 Unfortunately, meta‐analyses of individual patient data are often difficult to conduct and time consuming, and it is often not easy to obtain the original data needed to perform such an analysis.

Cumulative and Bayesian meta‐analyses

Another form of meta‐analysis is the so‐called “cumulative meta‐analysis”. Cumulative meta‐analyses recognize the cumulative nature of scientific evidence and knowledge. 11 In a cumulative meta‐analysis, a new relevant study on a given topic is added whenever it becomes available. A cumulative meta‐analysis therefore shows the pattern of evidence over time and can identify the point at which a treatment effect becomes clinically significant. 11 , 15 , 33 Cumulative meta‐analyses are not simply updated meta‐analyses: there is not a single pooling; rather, the results are re‐summarized as each new study is added. 33 As a consequence, in the forest plot commonly used for displaying the effect estimates, the horizontal lines represent the treatment effect estimates as each study is added, not the results of the single studies. Cumulative meta‐analyses should be interpreted within the Bayesian framework, even though they differ from the “pure” Bayesian approach to meta‐analysis.
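A small sketch of the idea (hypothetical effect estimates and variances, ordered by publication date; fixed-effect inverse-variance pooling is used here purely for brevity):

    # Sketch: cumulative meta-analysis re-pools the evidence each time a
    # new study (sorted by year) is added.
    import numpy as np

    effects = np.array([0.35, 0.10, 0.28, 0.15, 0.22])     # hypothetical estimates
    variances = np.array([0.04, 0.02, 0.03, 0.01, 0.02])   # their variances

    for k in range(1, len(effects) + 1):
        w = 1 / variances[:k]                   # weights of the first k studies
        pooled = np.sum(w * effects[:k]) / np.sum(w)
        se = np.sqrt(1 / np.sum(w))
        print(f"after study {k}: {pooled:.3f} "
              f"(95% CI {pooled - 1.96 * se:.3f} to {pooled + 1.96 * se:.3f})")

Each printed line corresponds to one horizontal line of the cumulative forest plot described above.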

The Bayesian approach differs from the classical, or frequentist, methods of meta‐analysis in that data and model parameters are considered random quantities, and probability is interpreted as an uncertainty rather than a frequency. 11 , 15 , 34 Compared to the frequentist methods, the Bayesian approach incorporates prior distributions, which can be specified based on a priori beliefs (being unknown random quantities), while the evidence coming from the study is described by a likelihood function. 11 , 15 , 34 The combination of the prior distribution and the likelihood function gives the posterior probability density function. 34 The uncertainty around the posterior effect estimate is expressed as a credibility interval, the equivalent of the confidence interval in the frequentist approach. 11 , 15 , 34 Although Bayesian meta‐analyses are increasing, they are still less common than traditional (frequentist) meta‐analyses.

Conducting a systematic review and meta‐analysis

As aforementioned, a systematic review must follow well‐defined and established methods. One reference source of practical guidelines for properly applying methodological principles when conducting systematic reviews and meta‐analyses is the Cochrane Handbook for Systematic Reviews of Interventions, which is available for free online. 12 Other guidelines and textbooks on systematic reviews and meta‐analysis are also available. 11 , 13 , 14 , 15 Similarly, authors of reviews should report their results in a transparent and complete way, and for this reason an international group of experts developed and published the QUOROM (Quality Of Reporting Of Meta‐analyses) 16 and, more recently, the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta‐Analyses) 17 guidelines addressing the reporting of systematic reviews and meta‐analyses of studies that evaluate healthcare interventions. 17 , 18

In this section the authors briefly present the principal steps necessary for conducting a systematic review and meta‐analysis, derived from the available reference guidelines and textbooks in which all of the contents of the following section (and much more) can be found. 11 , 12 , 14 A summary of the steps is presented in Figure 1 . The methods are similar to those of any other study and start with the careful development of a review protocol, which includes the definition of the research question, the collection and analysis of data, and the interpretation of the results. The protocol defines the methods that will be used in the review and should be set out before starting the review in order to avoid bias; any deviation from it should be reported and justified in the manuscript.

Figure 1. Steps in conducting a systematic review. Modified from 11 , 14

Step 1. Defining the review question and eligibility criteria

The authors should start by formulating a precise research question, which means they should clearly report the objectives of the review and the question they would like to address. If necessary, a broad research question may be divided into more specific questions. According to the PICOS framework, 35 , 36 the question should define the P opulation(s), I ntervention(s), C omparator(s), O utcome(s) and S tudy design(s), as in the example below. This information will also provide the rationale for the inclusion and exclusion criteria, for which a background section explaining the context and the key conceptual issues may also be needed. When using terms that may have different interpretations, operational definitions should be provided; an example is the term “neuromuscular control”, which can be interpreted in different ways by different researchers and practitioners. Furthermore, the inclusion criteria should be precise enough to allow the selection of all the studies relevant for answering the research question. In theory, only the best evidence available should be used for systematic reviews. Unfortunately, the use of an appropriate design (e.g. RCT) does not ensure that the study was well conducted. However, using cut‐offs in quality scores as inclusion criteria is not appropriate given their subjective nature, and a sensitivity analysis comparing all available studies based on key methodological characteristics is preferable.
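To make the framework concrete, a hypothetical PICOS specification (invented for illustration, not drawn from this article) might read:

    Population: adults with acute ischemic stroke
    Intervention: early supervised mobilization within 48 hours
    Comparator: standard ward care
    Outcome: functional independence at 90 days
    Study design: randomized controlled trials

Each element then maps directly onto an inclusion criterion and onto a block of the search strategy.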

Step 2. Searching for studies

The search strategy must be clearly stated and should allow the identification of all the relevant studies. The search strategy is usually based on the PICOS elements and can be conducted using electronic databases, reading the reference lists of relevant studies, hand‐searching journals and conference proceedings, contacting authors, experts in the field and manufacturers, for example.

Currently, it is possible to easily search the literature using electronic databases. However, using only one database does not ensure that all the relevant studies will be found, and therefore several databases should be searched. The Physiotherapy Evidence Database (PEDro: http://www.pedro.org.au ) provides free access to RCTs (about 18,000) and systematic reviews (almost 4000) on musculoskeletal and orthopaedic physiotherapy (sports being represented by more than 60%). Other available electronic databases are MEDLINE (through PubMed), EMBASE, SCOPUS, CINAHL, Web of Science (Thomson Reuters), and the Cochrane Controlled Trials Register. The necessity of using different databases is justified by the fact that, for example, 1800 journals indexed in MEDLINE are not indexed in EMBASE, and vice versa.

The creation and selection of appropriate keywords and search term lists is important to find the relevant literature, ensuring that the search will be highly sensitive without compromising precision. Therefore, the development of the search strategy is not easy and should be developed carefully taking into consideration the differences between databases and search interfaces. Although Boolean searching (e.g. AND, OR, NOT) and proximity operators (e.g. NEAR, NEXT) are usually available, every database interface has its own search syntax (e.g. different truncation and wildcards) and a different thesaurus for indexing (e.g. MeSH for MEDLINE and EMTREE for EMBASE). Filters already developed for specific topics are also available. For example, PEDro has filters included in search strategies (called SDIs) that are used regularly and automatically in some of the above mentioned databases for retrieving guidelines, RCTs, and systematic reviews. 37
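As a purely illustrative example of such syntax (invented for this primer's context, not taken from any study discussed here), a PubMed query combining a MeSH heading, free-text title/abstract synonyms, Boolean operators, and truncation might look like:

    ("Stroke"[MeSH] OR stroke[tiab] OR "cerebrovascular accident"[tiab])
    AND ("Sleep Initiation and Maintenance Disorders"[MeSH] OR insomnia*[tiab])

The same question would need a differently written strategy in EMBASE (EMTREE headings, different truncation symbols), which is why search strategies are usually developed and reported per database.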

After performing the literature search using electronic databases, other search strategies should also be adopted, such as browsing the reference lists of primary and secondary literature and hand searching journals that are not indexed. Internet sources such as specialized websites can also be used for retrieving grey literature (e.g. unpublished papers, reports, conference proceedings, theses, or any other publications produced by governments, institutions, associations, universities, etc.). Attempts may also be made to find unpublished studies, if any, in order to reduce the risk of publication bias (the tendency to publish positive results or results going in the same direction). Similarly, selecting only English‐language studies may exacerbate this bias, since authors may tend to publish more positive findings in international journals and more negative results in local journals. On the other hand, unpublished and non‐English studies generally have lower quality, and their inclusion may also introduce a bias. There is no rule for deciding whether or not to include unpublished or non‐English‐language studies; authors are usually invited to think about the influence of these decisions on the findings and/or to explore the effects of inclusion with a sensitivity analysis.

Step 3. Selecting the studies

The selection of the studies should be conducted by more than one reviewer as this process is quite subjective (the agreement, using kappa statistic, between reviewers should be reported together with the reasons for disagreements). Before selecting the studies, the results of the different searches are merged using reference management software and duplicates deleted. After an initial screening of titles and abstracts where the obviously irrelevant studies are removed, the full papers of potentially relevant studies should be retrieved and are selected based on the previously defined inclusion and exclusion criteria. In case of disagreements, a consensus should be reached by discussion or with the help of a third reviewer. Direct contact with the author(s) of the study may also help in clarifying a decision.
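As a minimal sketch of the agreement statistic mentioned above (the screening decisions below are hypothetical):

    # Sketch: chance-corrected agreement (Cohen's kappa) between two
    # screeners' include/exclude decisions on six hypothetical records.
    from sklearn.metrics import cohen_kappa_score

    reviewer_a = ["include", "exclude", "exclude", "include", "exclude", "include"]
    reviewer_b = ["include", "exclude", "include", "include", "exclude", "exclude"]

    kappa = cohen_kappa_score(reviewer_a, reviewer_b)
    print(f"Cohen's kappa = {kappa:.2f}")  # 1 = perfect agreement, 0 = chance level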

An important phase at this step is the assessment of quality. Using quality scores to weight the studies entered in the meta‐analysis is not recommended, nor is including in a meta‐analysis only studies above a cut‐off quality score. However, the quality of the studies must be considered when interpreting the results of a meta‐analysis. This can be done qualitatively, or quantitatively through subgroup and sensitivity analyses based on important methodological aspects, which can be assessed using checklists (preferable over quality scores). If quality scores are nevertheless to be used for weighting, alternative statistical techniques have been proposed (e.g. 38 ). The assessment of quality should be performed by two independent observers. The Cochrane handbook, however, makes a distinction between study quality and risk of bias (related, for example, to the method used to generate the random allocation, concealment, blinding, etc.), focusing more on the latter. As with quality assessment, the risk of bias should be taken into consideration when interpreting the findings of the meta‐analysis. The quality of a study is generally assessed based on the information reported in the study, thus linking the quality of reporting to the quality of the research itself, which is not necessarily valid. Furthermore, a study conducted to the highest possible standard may still have a high risk of bias. In both cases, however, it is important that the authors of primary studies appropriately report their results, and for this reason guidelines such as the CONSORT (Consolidated Standards of Reporting Trials 39 ) and the STROBE (Strengthening the Reporting of Observational Studies in Epidemiology 40 ) statements have been created to improve the quality of reporting.

Step 4. Data extraction

Data extraction must be accurate and unbiased and therefore, to reduce possible errors, it should be performed by at least two researchers. Standardized data extraction forms should be created, tested, and if necessary modified before implementation. The extraction forms should be designed taking into consideration the research question and the planned analyses. Information extracted can include general information (author, title, type of publication, country of origin, etc.), study characteristics (e.g. aims of the study, design, randomization techniques, etc.), participant characteristics (e.g. age, gender, etc.), intervention and setting, outcome data and results (e.g. statistical techniques, measurement tool, number of follow up, number of participants enrolled, allocated, and included in the analysis, results of the study such as odds ratio, risk ratio, mean difference and confidence intervals, etc.). Disagreements should be noted and resolved by discussing and reaching a consensus. If needed, a third researcher can be involved to resolve the disagreement.

Step 5. Analysis and presentation of the results (data synthesis)

Once the data are extracted, they are combined, analyzed, and presented. This data synthesis can be done quantitatively using statistical techniques (meta‐analysis), or qualitatively using a narrative approach when pooling is not believed to be appropriate. Irrespective of the approach (quantitative or qualitative), the synthesis should start with a descriptive summary (in tabular form) of the included studies. This table usually includes details on study type, interventions, sample sizes, participant characteristics, outcomes, for example. The quality assessment or the risk of bias should also be reported. For narrative reviews a comprehensive synthesis framework ( Figure 2 ) has been proposed. 14 , 41

Figure 2. Narrative synthesis framework. Modified from 14 , 41

Standardization of outcomes

To allow comparison between studies, the results should be expressed in a standardized format, such as effect sizes. An appropriate effect size should be comparable between studies, computable from the data available in the original articles, and interpretable. When the outcomes of the primary studies are reported as means and standard deviations, the effect size can be the raw (unstandardized) difference in means (D), the standardized difference in means (d or g), or the response ratio (R). If the results are reported as binary outcomes, the effect size can be the risk ratio (RR), the odds ratio (OR), or the risk difference (RD). 15
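For two of the effect sizes named above, the standard formulas are as follows (textbook definitions, not reproduced from this article). For continuous outcomes, the standardized difference in means from group means $\bar{X}_1, \bar{X}_2$, standard deviations $S_1, S_2$, and sample sizes $n_1, n_2$ is

    d = \frac{\bar{X}_1 - \bar{X}_2}{S_{pooled}},
    \qquad
    S_{pooled} = \sqrt{\frac{(n_1 - 1) S_1^2 + (n_2 - 1) S_2^2}{n_1 + n_2 - 2}}

and, for binary outcomes arranged in a 2 × 2 table with cell counts $n_{11}, n_{10}, n_{01}, n_{00}$, the log odds ratio and its standard error are

    \ln \mathrm{OR} = \ln\!\left( \frac{n_{11}\, n_{00}}{n_{10}\, n_{01}} \right),
    \qquad
    SE_{\ln \mathrm{OR}} = \sqrt{\frac{1}{n_{11}} + \frac{1}{n_{10}} + \frac{1}{n_{01}} + \frac{1}{n_{00}}}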

Statistical analysis

When a quantitative approach is chosen, meta‐analytical techniques are used. Textbooks and courses are available for learning statistical meta‐analytical techniques. Once a summary statistic is calculated for each study, a "pooled" effect estimate of the interventions is determined as the weighted average of the individual study estimates, so that larger studies carry more "weight" than small studies. This is necessary because small studies are more affected by the play of chance. 11 , 15 The two main statistical models used for combining the results are the "fixed‐effect" and the "random‐effects" models. Under the fixed‐effect model, it is assumed that the variability between studies is due only to random variation, because there is only one true (common) effect. In other words, it is assumed that the group of studies gives an estimate of the same treatment effect, and therefore the effects are part of the same distribution. A common method for weighting each study is the inverse‐variance method, where the weight is given by the inverse of the variance of each estimate. The two essential pieces of data required for this calculation are therefore the effect estimate and its standard error. The "random‐effects" model, on the other hand, assumes a different underlying effect for each study (the true effect varies from study to study). The study weight must therefore take into account two sources of error: the between‐ and the within‐studies variance. As in the fixed‐effect model, the weight is calculated using the inverse‐variance method, but in the random‐effects model the study‐specific standard errors are adjusted to incorporate both the within‐ and between‐studies variance. For this reason, the confidence intervals obtained with random‐effects models are usually wider. In theory, the fixed‐effect model is appropriate when the study results are homogeneous, while the random‐effects model should be applied when they are heterogeneous. However, the statistical tests for examining heterogeneity lack power and, as mentioned earlier, heterogeneity should be carefully scrutinized (e.g. by inspecting the confidence intervals) before deciding on a model. Sometimes both fixed‐ and random‐effects models are used to examine the robustness of the analysis. Once the analyses are completed, results should be presented as point estimates with the corresponding confidence intervals and exact p‐values.
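
To make the weighting concrete, here is a minimal, self‐contained sketch of inverse‐variance pooling under both models, using the DerSimonian‐Laird estimator for the between‐studies variance (one common choice among several). The effect estimates and standard errors are invented for illustration:

```python
# Inverse-variance pooling: fixed effect and DerSimonian-Laird random effects.
import math

effects = [0.10, 0.55, -0.05, 0.60]  # invented study effects (e.g. log OR)
ses     = [0.12, 0.15, 0.20, 0.10]   # corresponding standard errors

# Fixed effect: weight each study by the inverse of its variance.
w = [1 / se**2 for se in ses]
pooled_fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
se_fixed = math.sqrt(1 / sum(w))

# Heterogeneity: Cochran's Q and the between-studies variance tau^2.
q = sum(wi * (yi - pooled_fixed)**2 for wi, yi in zip(w, effects))
df = len(effects) - 1
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# Random effects: add tau^2 to each within-study variance.
w_re = [1 / (se**2 + tau2) for se in ses]
pooled_re = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))

for label, est, se in [("fixed", pooled_fixed, se_fixed),
                       ("random", pooled_re, se_re)]:
    lo, hi = est - 1.96 * se, est + 1.96 * se
    print(f"{label}: {est:.3f} (95% CI {lo:.3f} to {hi:.3f})")
```

With these invented inputs the random‐effects interval comes out visibly wider than the fixed‐effect one, mirroring the behavior described above.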

Beyond the calculation of the individual study and summary estimates, other analyses are necessary. As mentioned several times, the exploration of possible sources of heterogeneity is important and can be performed using sensitivity, subgroup, or regression analyses. Meta‐regression can also be used to examine the effect of differences in study characteristics on the treatment effect estimate. In meta‐regression, as in the other analyses, larger studies have more influence than smaller ones, and the limitations of the technique should be taken into account both before deciding to use it and when interpreting the results.
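
As an illustrative sketch (invented data; real analyses would normally use dedicated packages such as R's metafor), a simple meta‐regression can be expressed as inverse‐variance weighted least squares of the study effects on a study‐level covariate, so that more precise studies have more influence:

```python
# A minimal meta-regression sketch: study effects regressed on a
# hypothetical study-level covariate ("mean age") with inverse-variance
# weights. All numbers are invented for illustration.
import numpy as np

effects = np.array([0.10, 0.55, -0.05, 0.60])  # per-study effect estimates
ses     = np.array([0.12, 0.15, 0.20, 0.10])   # their standard errors
age     = np.array([34.0, 52.0, 29.0, 58.0])   # study-level covariate

W = np.diag(1 / ses**2)                          # inverse-variance weights
X = np.column_stack([np.ones_like(age), age])    # intercept + covariate

# Weighted least-squares solution: beta = (X' W X)^(-1) X' W y
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ effects)
print(f"intercept = {beta[0]:.3f}, slope per year of age = {beta[1]:.3f}")
```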

Graphic display

The results of each trial are commonly displayed with their corresponding confidence intervals in the so‐called "forest plot" ( Figure 3 ). In a forest plot, each study is represented by a square with a horizontal line indicating the confidence interval; the size of the square reflects the weight of the study. A solid vertical line usually corresponds to no treatment effect. The summary point estimate is usually represented by a diamond at the bottom of the graph, with its horizontal extremities indicating the confidence interval. This graphic solution gives an immediate overview of the results.

Figure 3. Example of a forest plot: the squares represent the effect estimates of the individual studies and the horizontal lines indicate the confidence intervals; the size of each square reflects the weight of the study. The diamond at the bottom of the graph represents the summary point estimate, with its horizontal extremities indicating the confidence interval. In this example the authors used d as the standardized outcome measure.
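
A plot of this kind can be sketched in a few lines of matplotlib (invented data, with the pooled estimate taken loosely from the random‐effects sketch above; reviews typically use dedicated tools such as RevMan or R's metafor):

```python
# A minimal forest plot: CI lines, weight-proportional squares, a
# no-effect line, and a diamond for the pooled estimate. Invented data.
import matplotlib.pyplot as plt

studies = ["Study A", "Study B", "Study C", "Study D"]
effects = [0.10, 0.55, -0.05, 0.60]
ses     = [0.12, 0.15, 0.20, 0.10]
pooled, pooled_se = 0.32, 0.16       # e.g. from a random-effects model

fig, ax = plt.subplots()
ys = range(len(studies), 0, -1)      # plot studies top to bottom
weights = [1 / se**2 for se in ses]
for y, est, se, wgt in zip(ys, effects, ses, weights):
    ax.plot([est - 1.96 * se, est + 1.96 * se], [y, y], color="black")
    ax.plot(est, y, marker="s", color="black",
            markersize=4 + 10 * wgt / max(weights))  # square size ~ weight

# Diamond for the summary estimate: its width spans the 95% CI.
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
ax.fill([lo, pooled, hi, pooled], [0, 0.25, 0, -0.25], color="black")

ax.axvline(0, linestyle="--", color="grey")          # line of no effect
ax.set_yticks(list(ys))
ax.set_yticklabels(studies)
ax.set_xlabel("Effect size (95% CI)")
plt.show()
```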

Another graphic tool, the funnel plot, can be used for investigating small‐study effects and for identifying publication bias ( Figure 4 ). The funnel plot is a scatter plot of the effect estimates of the individual studies against a measure of study size or precision (most commonly the standard error, although sample size is also used). If there is no publication bias, the funnel plot will be symmetrical ( Figure 4A ). However, funnel plot examination is subjective, based on visual inspection, and can therefore be unreliable. In addition, factors other than publication bias may influence the symmetry of the funnel plot, such as the measures used for estimating effect and precision, and genuine differences between small and large studies. 14 Its use and interpretation should therefore be approached with caution.

Figure 4. Example of symmetric (A) and asymmetric (B) funnel plots.
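
A funnel plot is likewise straightforward to sketch; in this minimal matplotlib example (invented data), the y‐axis is inverted so that large, precise studies sit at the top, and asymmetry would appear as a gap in one of the lower corners:

```python
# A minimal funnel plot: effect estimates against their standard errors,
# with pseudo 95% limits around the pooled estimate. Invented data.
import matplotlib.pyplot as plt

effects = [0.12, 0.30, 0.25, 0.48, 0.05, 0.61, 0.33, 0.20]
ses     = [0.05, 0.08, 0.11, 0.15, 0.18, 0.22, 0.06, 0.13]
pooled  = 0.27                       # pooled estimate, e.g. fixed effect

fig, ax = plt.subplots()
ax.scatter(effects, ses, color="black")
ax.axvline(pooled, linestyle="--", color="grey")

# Pseudo 95% limits: pooled estimate +/- 1.96 * SE at each precision.
se_max = max(ses) * 1.1
ax.plot([pooled, pooled - 1.96 * se_max], [0, se_max], color="grey")
ax.plot([pooled, pooled + 1.96 * se_max], [0, se_max], color="grey")

ax.invert_yaxis()                    # precise studies at the top
ax.set_xlabel("Effect estimate")
ax.set_ylabel("Standard error")
plt.show()
```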

Step 6. Interpretation of the results

The final part of the process concerns the interpretation of the results. When interpreting or commenting on the findings, the limitations should be discussed and taken into account, including the overall risk of bias, the specific biases of the studies included in the systematic review, and the strength of the evidence. Furthermore, the interpretation should not rest solely on P‐values, but rather on the uncertainty of the estimates and their clinical/practical importance. Ideally, the interpretation should help the clinician understand how to apply the findings in practice, provide recommendations or implications for policy, and offer directions for further research.

CONCLUSIONS

Systematic reviews have to meet high methodological standards, and their results should be translated into clinically relevant information. These studies offer a valuable and useful summary of the current scientific evidence on a specific topic and can be used for developing evidence‐based guidelines. However, it is important that practitioners understand the basic principles behind the reviews and are hence able to appraise their methodological quality before using them as a source of knowledge. Furthermore, no RCT, systematic review, or meta‐analysis addresses all aspects of the wide variety of clinical situations. A typical example in sports physiotherapy is that most available studies deal with recreational athletes, while an individual clinician may work with high‐profile or elite athletes in the clinic. Therefore, when applying the results of a systematic review to clinical situations and individual patients, one should consider aspects such as the applicability of the findings to the individual patient, the feasibility in a particular setting, the benefit‐risk ratio, and the patient's values and preferences. 1 As reported in the definition, evidence‐based medicine is the integration of both research evidence and clinical expertise. As such, the experience of the sports PT should help in contextualizing and applying the findings of a systematic review or meta‐analysis, and in adjusting the expected effects to the individual patient. For example, an elite athlete is often more motivated and compliant in rehabilitation, and may have a better outcome than average with a given physical therapy or training intervention compared to a recreational athlete. It is therefore essential to merge the available evidence with the clinical evaluation and the patient's wishes (and the consequent treatment planning) in order to achieve evidence‐based management of the patient or athlete.

Prevalence and risk factors of early postoperative seizures in patients with glioma: A protocol for meta-analysis and systematic review

Affiliations

  • 1 The Second Hospital of Hebei Medical University, Shijiazhuang, Hebei, China.
  • 2 Department of Neurosurgery, The Second Affiliated Hospital of Hainan Medical University, Haikou, China.
  • PMID: 38574171
  • PMCID: PMC10994364
  • DOI: 10.1371/journal.pone.0301443

Introduction: Early postoperative seizures are the most common clinical manifestation in gliomas; however, the incidence of and risk factors for early postoperative seizures in gliomas remain controversial. This protocol describes a systematic review and meta-analysis to clarify the prevalence and risk factors of early postoperative seizures in patients with glioma.

Methods and analysis: Searches will be conducted in CNKI, WanFang, VIP, PubMed, Embase, the Cochrane Library, and Web of Science, covering the period from database inception to December 31st, 2023. Case-control and cohort studies of the incidence and risk factors for early postoperative seizures in all gliomas will be included. The primary outcomes will be incidence and risk factors. The Newcastle-Ottawa Scale will be used for quality evaluation. Article screening, data extraction, and risk of bias assessment will each be performed independently by two reviewers.

Results: This study will provide evidence on the risk factors and incidence of early postoperative seizures in patients with glioma.

Conclusion: Our study will provide evidence for the prevention of early postoperative seizures in glioma patients.

Trial registration: This protocol was registered in PROSPERO; the registration number is CRD42023415658.
