U.S. Food and Drug Administration

What Are the Different Types of Clinical Research?

Different types of clinical research are used depending on what the researchers are studying. Below are descriptions of some different kinds of clinical research.

Treatment Research generally involves an intervention such as medication, psychotherapy, new devices, or new approaches to surgery or radiation therapy. 

Prevention Research looks for better ways to prevent disorders from developing or returning. Different kinds of prevention research may study medicines, vitamins, vaccines, minerals, or lifestyle changes. 

Diagnostic Research refers to the practice of looking for better ways to identify a particular disorder or condition. 

Screening Research aims to find the best ways to detect certain disorders or health conditions. 

Quality of Life Research explores ways to improve comfort and the quality of life for individuals with a chronic illness. 

Genetic studies aim to improve the prediction of disorders by identifying and understanding how genes and illnesses may be related. Research in this area may explore ways in which a person’s genes make him or her more or less likely to develop a disorder. This may lead to development of tailor-made treatments based on a patient’s genetic make-up. 

Epidemiological studies seek to identify the patterns, causes, and control of disorders in groups of people. 

An important note: some clinical research is “outpatient,” meaning that participants do not stay overnight at the hospital. Some is “inpatient,” meaning that participants will need to stay for at least one night in the hospital or research center. Be sure to ask the researchers what their study requires. 

Phases of clinical trials: when clinical research is used to evaluate medications and devices

Clinical trials are a kind of clinical research designed to evaluate and test new interventions such as psychotherapy or medications. Clinical trials are often conducted in four phases. The trials at each phase have a different purpose and help scientists answer different questions.

Phase I trials

Researchers test an experimental drug or treatment in a small group of people for the first time. The researchers evaluate the treatment’s safety, determine a safe dosage range, and identify side effects.

Phase II trials

The experimental drug or treatment is given to a larger group of people to see if it is effective and to further evaluate its safety.

Phase III trials

The experimental study drug or treatment is given to large groups of people. Researchers confirm its effectiveness, monitor side effects, compare it to commonly used treatments, and collect information that will allow the experimental drug or treatment to be used safely.

Phase IV trials

Post-marketing studies, which are conducted after a treatment is approved for use by the FDA, provide additional information including the treatment or drug’s risks, benefits, and best use.

Examples of other kinds of clinical research

Many people believe that all clinical research involves testing of new medications or devices. This is not true, however. Some studies do not involve testing medications, and a person’s regular medications may not need to be changed. Healthy volunteers are also needed so that researchers can compare their results to results of people with the illness being studied. Some examples of other kinds of research include the following:

A long-term study that involves psychological tests or brain scans

A genetic study that involves blood tests but no changes in medication

A study of family history that involves talking to family members to learn about people’s medical needs and history.


About Clinical Studies

Research: it's all about patients.

Mayo's mission is about the patient; the patient comes first. So the mission of research here is to advance how we can best help the patient, how to make sure the patient comes first in care. In many ways, it's a cycle. It can start with something as simple as an idea, worked on in a laboratory, brought to the patient's bedside, and, if everything goes right and it proves helpful or beneficial, adopted as a standard approach. And I think that is one of the unique characteristics of Mayo's approach to research, that patient-centeredness. That really helps to put it in its own spotlight.

At Mayo Clinic, the needs of the patient come first. Part of this commitment involves conducting medical research with the goal of helping patients live longer, healthier lives.

Through clinical studies, which involve people who volunteer to participate in them, researchers can better understand how to diagnose, treat and prevent diseases or conditions.

Types of clinical studies

  • Observational study. A type of study in which people are observed or certain outcomes are measured. No attempt is made by the researcher to affect the outcome — for example, no treatment is given by the researcher.
  • Clinical trial (interventional study). During clinical trials, researchers learn if a new test or treatment works and is safe. Treatments studied in clinical trials might be new drugs or new combinations of drugs, new surgical procedures or devices, or new ways to use existing treatments. Find out more about the five phases of non-cancer clinical trials on ClinicalTrials.gov or the National Cancer Institute phases of cancer trials.
  • Medical records research. Medical records research involves the use of information collected from medical records. By studying the medical records of large groups of people over long periods of time, researchers can see how diseases progress and which treatments and surgeries work best. Find out more about Minnesota research authorization.

Clinical studies may differ from standard medical care

A health care provider diagnoses and treats existing illnesses or conditions based on current clinical practice guidelines and available, approved treatments.

But researchers are constantly looking for new and better ways to prevent and treat disease. In their laboratories, they explore ideas and test hypotheses through discovery science. Some of these ideas move into formal clinical trials.

During clinical studies, researchers formally and scientifically gather new knowledge and possibly translate these findings into improved patient care.

Before clinical trials begin

This video demonstrates how discovery science works, what happens in the research lab before clinical studies begin, and how a discovery is transformed into a potential therapy ready to be tested in trials with human participants:

How clinical trials work

Trace the clinical trial journey from a discovery research idea to a viable translatable treatment for patients:

See a glossary of terms related to clinical studies, clinical trials and medical research on ClinicalTrials.gov.

Watch a video about clinical studies to help you prepare to participate.

Let's Talk About Clinical Research

Narrator: This presentation is a brief introduction to the terms, purposes, benefits and risks of clinical research.

If you have questions about the content of this program, talk with your health care provider.

What is clinical research?

Clinical research is a process to find new and better ways to understand, detect, control and treat health conditions. The scientific method is used to find answers to difficult health-related questions.

Ways to participate

There are many ways to participate in clinical research at Mayo Clinic. Three common ways are by volunteering to be in a study, by giving permission to have your medical record reviewed for research purposes, and by allowing your blood or tissue samples to be studied.

Types of clinical research

There are many types of clinical research:

  • Prevention studies look at ways to stop diseases from occurring or from recurring after successful treatment.
  • Screening studies compare detection methods for common conditions.
  • Diagnostic studies test methods for early identification of disease in those with symptoms.
  • Treatment studies test new combinations of drugs and new approaches to surgery, radiation therapy and complementary medicine.
  • Genetic studies examine the role of inheritance in health and disease; they may be conducted on their own or as part of other research.
  • Quality of life studies explore ways to manage symptoms of chronic illness or side effects of treatment.
  • Medical records studies review information from large groups of people.

Clinical research volunteers

Participants in clinical research volunteer to take part. Participants may be healthy, at high risk for developing a disease, or already diagnosed with a disease or illness. When a study is offered, individuals may choose whether or not to participate. If they choose to participate, they may leave the study at any time.

Research terms

You will hear many terms describing clinical research. These include research study, experiment, medical research and clinical trial.

Clinical trial

A clinical trial is research to answer specific questions about new therapies or new ways of using known treatments. Clinical trials take place in phases. For a treatment to become standard, it usually goes through two or three clinical trial phases. The early phases look at treatment safety. Later phases continue to look at safety and also determine the effectiveness of the treatment.

Phase I clinical trial

A small number of people participate in a phase I clinical trial. The goals are to determine safe dosages and methods of treatment delivery. This may be the first time the drug or intervention is used with people.

Phase II clinical trial

Phase II clinical trials have more participants. The goals are to evaluate the effectiveness of the treatment and to monitor side effects. Side effects are monitored in all the phases, but this is a special focus of phase II.

Phase III clinical trial

Phase III clinical trials have the largest number of participants and may take place in multiple health care centers. The goal of a phase III clinical trial is to compare the new treatment to the standard treatment. Sometimes the standard treatment is no treatment.

Phase IV clinical trial

A phase IV clinical trial may be conducted after U.S. Food and Drug Administration approval. The goal is to further assess the long-term safety and effectiveness of a therapy. Smaller numbers of participants may be enrolled if the disease is rare. Larger numbers will be enrolled for common diseases, such as diabetes or heart disease.

Clinical research sponsors

Mayo Clinic funds clinical research at its facilities in Rochester, Minnesota; Jacksonville, Florida; and Arizona, as well as in the Mayo Clinic Health System. Clinical research is conducted in partnership with other medical centers throughout the world. Other sponsors of research at Mayo Clinic include the National Institutes of Health, device or pharmaceutical companies, foundations and organizations.

Clinical research at Mayo Clinic

Dr. Hugh Smith, former chair of Mayo Clinic Board of Governors, stated, "Our commitment to research is based on our knowledge that medicine must be constantly moving forward, that we need to continue our efforts to better understand disease and bring the latest medical knowledge to our practice and to our patients."

This fits with the term "translational research," meaning what is learned in the laboratory goes quickly to the patient's bedside and what is learned at the bedside is taken back to the laboratory.

Ethics and safety of clinical research

All clinical research conducted at Mayo Clinic is reviewed and approved by Mayo's Institutional Review Board. Multiple specialized committees and colleagues may also provide review of the research. Federal rules help ensure that clinical research is conducted in a safe and ethical manner.

Institutional review board

An institutional review board (IRB) reviews all clinical research proposals. The goal is to protect the welfare and safety of human subjects. The IRB continues its review as research is conducted.

Consent process

Participants sign a consent form to ensure that they understand key facts about a study. Such facts include that participation is voluntary and they may withdraw at any time. The consent form is an informational document, not a contract.

Study activities

Staff from the study team describe the research activities during the consent process. The research may include X-rays, blood tests, counseling or medications.

Study design

During the consent process, you may hear different phrases related to study design. Randomized means you will be assigned to a group by chance, much like a flip of a coin. In a single-blinded study, participants do not know which treatment they are receiving. In a double-blinded study, neither the participant nor the research team knows which treatment is being administered.

Some studies use an inactive substance called a placebo.

Multisite studies allow individuals from many different locations or health care centers to participate.

Remuneration

If the consent form states remuneration is provided, you will be paid for your time and participation in the study.

Some studies may involve additional costs. To address costs in a study, carefully review the consent form and discuss questions with the research team and your insurance company. Medicare may cover routine care costs that are part of clinical trials. Medicaid programs in some states may also provide coverage for routine care costs.

When considering participation in a research study, carefully look at the benefits and risks. Benefits may include earlier access to new clinical approaches and regular attention from a research team. Research participation often helps others in the future.

Risks/inconveniences

Risks may include side effects. The research treatment may be no better than the standard treatment. More visits, if required in the study, may be inconvenient.

Weigh your risks and benefits

Consider your situation as you weigh the risks and benefits of participation prior to enrolling and during the study. You may stop participation in the study at any time.

Ask questions

Stay informed while participating in research:

  • Write down questions you want answered.
  • If you do not understand, say so.
  • If you have concerns, speak up.

Website resources are available. The first website lists clinical research at Mayo Clinic. The second website, provided by the National Institutes of Health, lists studies occurring in the United States and throughout the world.

Additional information about clinical research may be found at the Mayo Clinic Barbara Woodward Lips Patient Education Center and the Stephen and Barbara Slaggie Family Cancer Education Center.

Clinical studies questions

  • Phone: 800-664-4542 (toll-free)
  • Contact form

Cancer-related clinical studies questions

  • Phone: 855-776-0015 (toll-free)

International patient clinical studies questions

Clinical Studies in Depth

Learning all you can about clinical studies helps you prepare to participate.

  • Institutional Review Board

The Institutional Review Board protects the rights, privacy, and welfare of participants in research programs conducted by Mayo Clinic and its associated faculty, professional staff, and students.



Principles of Research Methodology

A Guide for Clinical Investigators

© 2012

Editors: Phyllis G. Supino and Jeffrey S. Borer, Cardiovascular Medicine, SUNY Downstate Medical Center, Brooklyn, USA

  • Based on a highly regarded and popular lecture series on research methodology
  • Comprehensive guide written by experts in the field
  • Emphasizes the essentials and fundamentals of research methodologies


About this book

Principles of Research Methodology: A Guide for Clinical Investigators is the definitive, comprehensive guide to understanding and performing clinical research. Designed for medical students, physicians, basic scientists involved in translational research, and other health professionals, this indispensable reference also addresses the unique challenges and demands of clinical research and offers clear guidance in becoming a more successful member of a medical research team and critical reader of the medical research literature. The book covers the entire research process, beginning with the conception of the research problem and ending with publication of findings. It presents concepts comprehensively and concisely, in a manner that is relevant and engaging to read.

The text combines theory and practical application to familiarize the reader with the logic of research design and hypothesis construction, the importance of research planning, the ethical basis of human subjects research, the basics of writing a clinical research protocol and scientific paper, the logic and techniques of data generation and management, and the fundamentals and implications of various sampling techniques and alternative statistical methodologies. Organized in thirteen easy-to-read chapters, the text emphasizes the importance of clearly defined research questions and well-constructed hypotheses (reinforced throughout the various chapters) for informing methods and guiding data interpretation.

Written by prominent medical scientists and methodologists who have extensive personal experience in biomedical investigation and in teaching key aspects of research methodology to medical students, physicians and other health professionals, the authors expertly integrate theory with examples and employ language that is clear and useful for a general medical audience. A major contribution to the methodology literature, Principles of Research Methodology: A Guide for Clinical Investigators is an authoritative resource for all individuals who perform research, plan to perform it, or wish to understand it better.


Table of contents (13 chapters)

Front Matter

Overview of the Research Process

Phyllis G. Supino

Developing a Research Problem

  • Phyllis G. Supino, Helen Ann Brown Epstein

The Research Hypothesis: Role and Construction

Design and Interpretation of Observational Studies: Cohort, Case–Control, and Cross-Sectional Designs

  • Martin L. Lesser

Fundamental Issues in Evaluating the Impact of Interventions: Sources and Control of Bias

Protocol Development and Preparation for a Clinical Trial

  • Joseph A. Franciosa

Data Collection and Management in Clinical Research

  • Mario Guralnik

Constructing and Evaluating Self-Report Measures

  • Peter L. Flom, Phyllis G. Supino, N. Philip Ross

Selecting and Evaluating Secondary Data: The Role of Systematic Reviews and Meta-analysis

  • Lorenzo Paladino, Richard H. Sinert

Sampling Methodology: Implications for Drawing Conclusions from Clinical Research Findings

  • Richard C. Zink

Introductory Statistics in Medical Research

  • Todd A. Durham, Gary G. Koch, Lisa M. LaVange

Ethical Issues in Clinical Research

  • Eli A. Friedman

How to Prepare a Scientific Paper

Jeffrey S. Borer

Back Matter


Bibliographic Information

Book Title: Principles of Research Methodology

Book Subtitle: A Guide for Clinical Investigators

Editors: Phyllis G. Supino, Jeffrey S. Borer

DOI: https://doi.org/10.1007/978-1-4614-3360-6

Publisher: Springer New York, NY

eBook Packages: Medicine, Medicine (R0)

Copyright Information: Springer Science+Business Media, LLC 2012

Hardcover ISBN: 978-1-4614-3359-0 (published 22 June 2012)

Softcover ISBN: 978-1-4939-4292-3 (published 23 August 2016)

eBook ISBN: 978-1-4614-3360-6 (published 22 June 2012)

Edition Number: 1

Number of Pages: XVI, 276

Topics: Oncology, Cardiology, Internal Medicine, Endocrinology, Neurology

  • Open access
  • Published: 07 September 2020

A tutorial on methodological studies: the what, when, how and why

  • Lawrence Mbuagbaw (ORCID: orcid.org/0000-0001-5855-5461),
  • Daeria O. Lawson,
  • Livia Puljak,
  • David B. Allison &
  • Lehana Thabane

BMC Medical Research Methodology, volume 20, Article number: 226 (2020)


Methodological studies – studies that evaluate the design, analysis or reporting of other research-related reports – play an important role in health research. They help to highlight issues in the conduct of research with the aim of improving health research methodology, and ultimately reducing research waste.

We provide an overview of some of the key aspects of methodological studies such as what they are, and when, how and why they are done. We adopt a “frequently asked questions” format to facilitate reading this paper and provide multiple examples to help guide researchers interested in conducting methodological studies. Some of the topics addressed include: Is it necessary to publish a study protocol? How should relevant research reports and databases be selected for a methodological study? What approaches to data extraction and statistical analysis should be considered when conducting a methodological study? What are potential threats to validity, and is there a way to appraise the quality of methodological studies?

Appropriate reflection and application of basic principles of epidemiology and biostatistics are required in the design and analysis of methodological studies. This paper provides an introduction for further discussion about the conduct of methodological studies.


The field of meta-research (or research-on-research) has proliferated in recent years in response to issues with research quality and conduct [ 1 , 2 , 3 ]. As the name suggests, this field targets issues with research design, conduct, analysis and reporting. Various types of research reports are often examined as the unit of analysis in these studies (e.g. abstracts, full manuscripts, trial registry entries). Like many other novel fields of research, meta-research has seen a proliferation of use before the development of reporting guidance. For example, this was the case with randomized trials for which risk of bias tools and reporting guidelines were only developed much later – after many trials had been published and noted to have limitations [ 4 , 5 ]; and for systematic reviews as well [ 6 , 7 , 8 ]. However, in the absence of formal guidance, studies that report on research differ substantially in how they are named, conducted and reported [ 9 , 10 ]. This creates challenges in identifying, summarizing and comparing them. In this tutorial paper, we will use the term methodological study to refer to any study that reports on the design, conduct, analysis or reporting of primary or secondary research-related reports (such as trial registry entries and conference abstracts).

In the past 10 years, there has been an increase in the use of terms related to methodological studies (based on records retrieved with a keyword search [in the title and abstract] for “methodological review” and “meta-epidemiological study” in PubMed up to December 2019), suggesting that these studies may be appearing more frequently in the literature. See Fig.  1 .

Figure 1. Trends in the number of studies that mention “methodological review” or “meta-epidemiological study” in PubMed.
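For readers who want to reproduce this kind of count, the sketch below is one way yearly PubMed hits for a title/abstract keyword could be retrieved with Biopython's Entrez utilities. It is not the code used for Fig. 1; the exact search terms, date handling and placeholder email address are illustrative assumptions.

```python
# A minimal sketch of counting PubMed records per year for a keyword
# searched in the title and abstract, using Biopython's Entrez utilities.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # required by NCBI; placeholder address

def count_hits(term: str, year: int) -> int:
    """Count PubMed records matching `term` in title/abstract for one year."""
    handle = Entrez.esearch(
        db="pubmed",
        term=f'"{term}"[tiab]',
        mindate=str(year),
        maxdate=str(year),
        datetype="pdat",
        retmax=0,  # only the total count is needed, not the record IDs
    )
    record = Entrez.read(handle)
    handle.close()
    return int(record["Count"])

for year in range(2010, 2020):
    n = count_hits("methodological review", year) + count_hits(
        "meta-epidemiological study", year
    )
    print(year, n)
```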

The methods used in many methodological studies have been borrowed from systematic and scoping reviews. This practice has influenced the direction of the field, with many methodological studies including searches of electronic databases, screening of records, duplicate data extraction and assessments of risk of bias in the included studies. However, the research questions posed in methodological studies do not always require the approaches listed above, and guidance is needed on when and how to apply these methods to a methodological study. Even though methodological studies can be conducted on qualitative or mixed methods research, this paper focuses on and draws examples exclusively from quantitative research.

The objectives of this paper are to provide some insights on how to conduct methodological studies so that there is greater consistency between the research questions posed, and the design, analysis and reporting of findings. We provide multiple examples to illustrate concepts and a proposed framework for categorizing methodological studies in quantitative research.

What is a methodological study?

Any study that describes or analyzes methods (design, conduct, analysis or reporting) in published (or unpublished) literature is a methodological study. Consequently, the scope of methodological studies is quite extensive and includes, but is not limited to, topics as diverse as: research question formulation [ 11 ]; adherence to reporting guidelines [ 12 , 13 , 14 ] and consistency in reporting [ 15 ]; approaches to study analysis [ 16 ]; investigating the credibility of analyses [ 17 ]; and studies that synthesize these methodological studies [ 18 ]. While the nomenclature of methodological studies is not uniform, the intents and purposes of these studies remain fairly consistent – to describe or analyze methods in primary or secondary studies. As such, methodological studies may also be classified as a subtype of observational studies.

Parallel to this are experimental studies that compare different methods. Even though they play an important role in informing optimal research methods, experimental methodological studies are beyond the scope of this paper. Examples of such studies include the randomized trials by Buscemi et al., comparing single data extraction to double data extraction [ 19 ], and Carrasco-Labra et al., comparing approaches to presenting findings in Grading of Recommendations, Assessment, Development and Evaluations (GRADE) summary of findings tables [ 20 ]. In these studies, the unit of analysis is the person or groups of individuals applying the methods. We also direct readers to the Studies Within a Trial (SWAT) and Studies Within a Review (SWAR) programme operated through the Hub for Trials Methodology Research as a potentially useful resource for further reading on these types of experimental studies [ 21 ]. Lastly, this paper is not meant to inform the conduct of research using computational simulation and mathematical modeling, for which some guidance already exists [ 22 ], or studies on the development of methods using consensus-based approaches.

When should we conduct a methodological study?

Methodological studies occupy a unique niche in health research that allows them to inform methodological advances. Methodological studies should also be conducted as precursors to reporting guideline development, as they provide an opportunity to understand current practices, and help to identify the need for guidance and gaps in methodological or reporting quality. For example, the development of the popular Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines was preceded by methodological studies identifying poor reporting practices [ 23 , 24 ]. In these instances, after the reporting guidelines are published, methodological studies can also be used to monitor uptake of the guidelines.

These studies can also be conducted to inform the state of the art for design, analysis and reporting practices across different types of health research fields, with the aim of improving research practices, and preventing or reducing research waste. For example, Samaan et al. conducted a scoping review of adherence to different reporting guidelines in health care literature [ 18 ]. Methodological studies can also be used to determine the factors associated with reporting practices. For example, Abbade et al. investigated journal characteristics associated with the use of the Participants, Intervention, Comparison, Outcome, Timeframe (PICOT) format in framing research questions in trials of venous ulcer disease [ 11 ].

How often are methodological studies conducted?

There is no clear answer to this question. Based on a search of PubMed, the use of related terms (“methodological review” and “meta-epidemiological study”) – and therefore, the number of methodological studies – is on the rise. However, many other terms are used to describe methodological studies. There are also many studies that explore design, conduct, analysis or reporting of research reports, but that do not use any specific terms to describe or label their study design in terms of “methodology”. This diversity in nomenclature makes a census of methodological studies elusive. Appropriate terminology and key words for methodological studies are needed to facilitate improved accessibility for end-users.

Why do we conduct methodological studies?

Methodological studies provide information on the design, conduct, analysis or reporting of primary and secondary research and can be used to appraise quality, quantity, completeness, accuracy and consistency of health research. These issues can be explored in specific fields, journals, databases, geographical regions and time periods. For example, Areia et al. explored the quality of reporting of endoscopic diagnostic studies in gastroenterology [ 25 ]; Knol et al. investigated the reporting of p-values in baseline tables in randomized trials published in high impact journals [ 26 ]; Chen et al. described adherence to the Consolidated Standards of Reporting Trials (CONSORT) statement in Chinese journals [ 27 ]; and Hopewell et al. described the effect of editors’ implementation of CONSORT guidelines on reporting of abstracts over time [ 28 ]. Methodological studies provide useful information to researchers, clinicians, editors, publishers and users of health literature. As a result, these studies have been a cornerstone of important methodological developments in the past two decades and have informed the development of many health research guidelines including the highly cited CONSORT statement [ 5 ].

Where can we find methodological studies?

Methodological studies can be found in most common biomedical bibliographic databases (e.g. Embase, MEDLINE, PubMed, Web of Science). However, the biggest caveat is that methodological studies are hard to identify in the literature due to the wide variety of names used and the lack of comprehensive databases dedicated to them. A handful can be found in the Cochrane Library as “Cochrane Methodology Reviews”, but these studies only cover methodological issues related to systematic reviews. Previous attempts to catalogue all empirical studies of methods used in reviews were abandoned 10 years ago [ 29 ]. In other databases, a variety of search terms may be applied with different levels of sensitivity and specificity.

Some frequently asked questions about methodological studies

In this section, we have outlined responses to questions that might help inform the conduct of methodological studies.

Q: How should I select research reports for my methodological study?

A: Selection of research reports for a methodological study depends on the research question and eligibility criteria. Once a clear research question is set and the nature of literature one desires to review is known, one can then begin the selection process. Selection may begin with a broad search, especially if the eligibility criteria are not apparent. For example, a methodological study of Cochrane Reviews of HIV would not require a complex search as all eligible studies can easily be retrieved from the Cochrane Library after checking a few boxes [ 30 ]. On the other hand, a methodological study of subgroup analyses in trials of gastrointestinal oncology would require a search to find such trials, and further screening to identify trials that conducted a subgroup analysis [ 31 ].

The strategies used for identifying participants in observational studies can apply here. One may use a systematic search to identify all eligible studies. If the number of eligible studies is unmanageable, a random sample of articles can be expected to provide comparable results if it is sufficiently large [ 32 ]. For example, Wilson et al. used a random sample of trials from the Cochrane Stroke Group’s Trial Register to investigate completeness of reporting [ 33 ]. It is possible that a simple random sample would lead to underrepresentation of units (i.e. research reports) that are smaller in number. This is relevant if the investigators wish to compare multiple groups but have too few units in one group. In this case a stratified sample would help to create equal groups. For example, in a methodological study comparing Cochrane and non-Cochrane reviews, Kahale et al. drew random samples from both groups [ 34 ]. Alternatively, systematic or purposeful sampling strategies can be used and we encourage researchers to justify their selected approaches based on the study objective.
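As an illustration of these sampling options, the following sketch draws a simple random sample and a stratified sample from a hypothetical sampling frame of eligible reports. The column names, group labels and sample sizes are assumptions for illustration, not those of the cited studies.

```python
# A minimal sketch of simple random sampling and stratified sampling of
# research reports from a sampling frame, using pandas.
import pandas as pd

eligible = pd.DataFrame({
    "record_id": range(1, 1001),
    "group": ["Cochrane"] * 200 + ["non-Cochrane"] * 800,
})

# Simple random sample of 100 reports from the sampling frame.
simple_sample = eligible.sample(n=100, random_state=2020)

# Stratified sample: 50 reports per group, so the smaller group is not
# underrepresented when the two groups are to be compared.
stratified_sample = eligible.groupby("group").sample(n=50, random_state=2020)

print(simple_sample["group"].value_counts())
print(stratified_sample["group"].value_counts())
```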

Q: How many databases should I search?

A: The number of databases one should search would depend on the approach to sampling, which can include targeting the entire “population” of interest or a sample of that population. If you are interested in including the entire target population for your research question, or drawing a random or systematic sample from it, then a comprehensive and exhaustive search for relevant articles is required. In this case, we recommend using systematic approaches for searching electronic databases (i.e. at least 2 databases with a replicable and time stamped search strategy). The results of your search will constitute a sampling frame from which eligible studies can be drawn.

Alternatively, if your approach to sampling is purposeful, then we recommend targeting the database(s) or data sources (e.g. journals, registries) that include the information you need. For example, if you are conducting a methodological study of high impact journals in plastic surgery and they are all indexed in PubMed, you likely do not need to search any other databases. You may also have a comprehensive list of all journals of interest and can approach your search using the journal names in your database search (or by accessing the journal archives directly from the journal’s website). Even though one could also search journals’ web pages directly, using a database such as PubMed has multiple advantages, such as the use of filters, so the search can be narrowed down to a certain period, or study types of interest. Furthermore, individual journals’ web sites may have different search functionalities, which do not necessarily yield a consistent output.

Q: Should I publish a protocol for my methodological study?

A: A protocol is a description of intended research methods. Currently, only protocols for clinical trials require registration [ 35 ]. Protocols for systematic reviews are encouraged but no formal recommendation exists. The scientific community welcomes the publication of protocols because they help protect against selective outcome reporting and the use of post hoc methodologies to embellish results, and help avoid duplication of efforts [ 36 ]. While the latter two risks exist in methodological research, the negative consequences may be substantially less than for clinical outcomes. In a sample of 31 methodological studies, 7 (22.6%) referenced a published protocol [ 9 ]. In the Cochrane Library, there are 15 protocols for methodological reviews (21 July 2020). This suggests that publishing protocols for methodological studies is not uncommon.

Authors can consider publishing their study protocol in a scholarly journal as a manuscript. Advantages of such publication include obtaining peer-review feedback about the planned study, and easy retrieval by searching databases such as PubMed. The disadvantages of trying to publish protocols include delays associated with manuscript handling and peer review, as well as costs, as few journals publish study protocols, and those journals mostly charge article-processing fees [ 37 ]. Authors who would like to make their protocol publicly available without publishing it in scholarly journals could deposit their study protocols in publicly available repositories, such as the Open Science Framework ( https://osf.io/ ).

Q: How to appraise the quality of a methodological study?

A: To date, there is no published tool for appraising the risk of bias in a methodological study, but in principle, a methodological study could be considered as a type of observational study. Therefore, during conduct or appraisal, care should be taken to avoid the biases common in observational studies [ 38 ]. These biases include selection bias, comparability of groups, and ascertainment of exposure or outcome. In other words, to generate a representative sample, a comprehensive reproducible search may be necessary to build a sampling frame. Additionally, random sampling may be necessary to ensure that all the included research reports have the same probability of being selected, and the screening and selection processes should be transparent and reproducible. To ensure that the groups compared are similar in all characteristics, matching, random sampling or stratified sampling can be used. Statistical adjustments for between-group differences can also be applied at the analysis stage. Finally, duplicate data extraction can reduce errors in assessment of exposures or outcomes.

Q: Should I justify a sample size?

A: In all instances where one is not using the target population (i.e. the group to which inferences from the research report are directed) [ 39 ], a sample size justification is good practice. The sample size justification may take the form of a description of what is expected to be achieved with the number of articles selected, or a formal sample size estimation that outlines the number of articles required to answer the research question with a certain precision and power. Sample size justifications in methodological studies are reasonable in the following instances:

  • Comparing two groups

  • Determining a proportion, mean or another quantifier

  • Determining factors associated with an outcome using regression-based analyses

For example, El Dib et al. computed a sample size requirement for a methodological study of diagnostic strategies in randomized trials, based on a confidence interval approach [ 40 ].
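As a rough illustration of a confidence-interval approach, the sketch below computes the number of articles needed to estimate a proportion with a given precision. The expected proportion and margin of error are assumed values for illustration, not the figures used by El Dib et al.

```python
# A minimal sketch of a confidence-interval-based sample size calculation:
# n = z^2 * p * (1 - p) / d^2 for estimating a proportion p within +/- d.
from math import ceil
from scipy.stats import norm

def n_for_proportion(p_expected: float, margin: float, conf: float = 0.95) -> int:
    """Articles needed to estimate a proportion within +/- `margin`."""
    z = norm.ppf(1 - (1 - conf) / 2)
    n = z**2 * p_expected * (1 - p_expected) / margin**2
    return ceil(n)

# e.g. expecting ~30% of trials to report a feature, estimated to within 5 percentage points
print(n_for_proportion(0.30, 0.05))  # -> 323
```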

Q: What should I call my study?

A: Other terms which have been used to describe or label methodological studies include “methodological review”, “methodological survey”, “meta-epidemiological study”, “systematic review”, “systematic survey”, “meta-research”, “research-on-research” and many others. We recommend that the study nomenclature be clear, unambiguous, informative and allow for appropriate indexing. Methodological study nomenclature that should be avoided includes “systematic review” – as this will likely be confused with a systematic review of a clinical question. “Systematic survey” may also lead to confusion about whether the survey was systematic (i.e. using a preplanned methodology) or a survey using “systematic” sampling (i.e. a sampling approach using specific intervals to determine who is selected) [ 32 ]. Any of the above meanings of the word “systematic” may be true for methodological studies and could be potentially misleading. “Meta-epidemiological study” is ideal for indexing, but not very informative as it describes an entire field. The term “review” may point towards an appraisal or “review” of the design, conduct, analysis or reporting (or methodological components) of the targeted research reports, yet it has also been used to describe narrative reviews [ 41 , 42 ]. The term “survey” is also in line with the approaches used in many methodological studies [ 9 ], and would be indicative of the sampling procedures of this study design. However, in the absence of guidelines on nomenclature, the term “methodological study” is broad enough to capture most of the scenarios of such studies.

Q: Should I account for clustering in my methodological study?

A: Data from methodological studies are often clustered. For example, articles coming from a specific source may have different reporting standards (e.g. the Cochrane Library). Articles within the same journal may be similar due to editorial practices and policies, reporting requirements and endorsement of guidelines. There is emerging evidence that these are real concerns that should be accounted for in analyses [ 43 ]. Some cluster variables are described in the section: “ What variables are relevant to methodological studies?”

A variety of modelling approaches can be used to account for correlated data, including the use of marginal, fixed or mixed effects regression models with appropriate computation of standard errors [ 44 ]. For example, Kosa et al. used generalized estimation equations to account for correlation of articles within journals [ 15 ]. Not accounting for clustering could lead to incorrect p -values, unduly narrow confidence intervals, and biased estimates [ 45 ].
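As a sketch of one such modelling approach, the example below fits a generalized estimating equations (GEE) model with an exchangeable correlation structure so that articles are treated as clustered within journals, in the spirit of the Kosa et al. example. The data are synthetic and the variable names are assumptions for illustration.

```python
# A minimal sketch of accounting for clustering of articles within journals
# using GEE with an exchangeable working correlation structure.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2020)
n = 120
articles = pd.DataFrame({
    "journal": rng.choice(["A", "B", "C", "D", "E", "F"], size=n),
    "year": rng.integers(2010, 2020, size=n),
    "industry_funded": rng.integers(0, 2, size=n),
})
# Synthetic binary outcome: whether reporting was judged adequate.
articles["adequate_reporting"] = rng.integers(0, 2, size=n)

model = smf.gee(
    "adequate_reporting ~ year + industry_funded",
    groups="journal",               # articles are clustered within journals
    data=articles,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
print(model.fit().summary())
```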

Q: Should I extract data in duplicate?

A: Yes. Duplicate data extraction takes more time but results in fewer errors [ 19 ]. Data extraction errors in turn affect the effect estimate [ 46 ], and therefore should be mitigated. Duplicate data extraction should be considered in the absence of other approaches to minimize extraction errors. Much like systematic reviews, this area will likely see rapid new advances with machine learning and natural language processing technologies to support researchers with screening and data extraction [ 47 , 48 ]. However, experience plays an important role in the quality of extracted data, and inexperienced extractors should be paired with experienced extractors [ 46 , 49 ].
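One practical step when extracting in duplicate is to quantify agreement between extractors before discrepancies are resolved. The sketch below computes Cohen's kappa for a single binary extraction item; the item and the extraction data are hypothetical.

```python
# A minimal sketch of checking agreement between two independent data
# extractors on a binary item (1 = "reported", 0 = "not reported").
from sklearn.metrics import cohen_kappa_score

extractor_a = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
extractor_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

kappa = cohen_kappa_score(extractor_a, extractor_b)
disagreements = [i for i, (a, b) in enumerate(zip(extractor_a, extractor_b)) if a != b]
print(f"kappa = {kappa:.2f}; discrepancies at articles {disagreements}")
```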

Q: Should I assess the risk of bias of research reports included in my methodological study?

A: Risk of bias is most useful in determining the certainty that can be placed in the effect measure from a study. In methodological studies, risk of bias may not serve the purpose of determining the trustworthiness of results, as effect measures are often not the primary goal of methodological studies. Determining risk of bias in methodological studies is likely a practice borrowed from systematic review methodology, but whose intrinsic value is not obvious in methodological studies. When it is part of the research question, investigators often focus on one aspect of risk of bias. For example, Speich investigated how blinding was reported in surgical trials [ 50 ], and Abraha et al. investigated the application of intention-to-treat analyses in systematic reviews and trials [ 51 ].

Q: What variables are relevant to methodological studies?

A: There is empirical evidence that certain variables may inform the findings in a methodological study. We outline some of these and provide a brief overview below:

Country: Countries and regions differ in their research cultures, and the resources available to conduct research. Therefore, it is reasonable to believe that there may be differences in methodological features across countries. Methodological studies have reported loco-regional differences in reporting quality [ 52 , 53 ]. This may also be related to challenges non-English speakers face in publishing papers in English.

Authors’ expertise: The inclusion of authors with expertise in research methodology, biostatistics, and scientific writing is likely to influence the end-product. Oltean et al. found that among randomized trials in orthopaedic surgery, the use of analyses that accounted for clustering was more likely when specialists (e.g. statistician, epidemiologist or clinical trials methodologist) were included on the study team [ 54 ]. Fleming et al. found that including methodologists in the review team was associated with appropriate use of reporting guidelines [ 55 ].

Source of funding and conflicts of interest: Some studies have found that funded studies report better [ 56 , 57 ], while others do not [ 53 , 58 ]. The presence of funding would indicate the availability of resources deployed to ensure optimal design, conduct, analysis and reporting. However, the source of funding may introduce conflicts of interest and warrant assessment. For example, Kaiser et al. investigated the effect of industry funding on obesity or nutrition randomized trials and found that reporting quality was similar [ 59 ]. Thomas et al. looked at reporting quality of long-term weight loss trials and found that industry funded studies were better [ 60 ]. Kan et al. examined the association between industry funding and “positive trials” (trials reporting a significant intervention effect) and found that industry funding was highly predictive of a positive trial [ 61 ]. This finding is similar to that of a recent Cochrane Methodology Review by Hansen et al. [ 62 ]

Journal characteristics: Certain journals’ characteristics may influence the study design, analysis or reporting. Characteristics such as journal endorsement of guidelines [ 63 , 64 ], and Journal Impact Factor (JIF) have been shown to be associated with reporting [ 63 , 65 , 66 , 67 ].

Study size (sample size/number of sites): Some studies have shown that reporting is better in larger studies [ 53 , 56 , 58 ].

Year of publication: It is reasonable to assume that design, conduct, analysis and reporting of research will change over time. Many studies have demonstrated improvements in reporting over time or after the publication of reporting guidelines [ 68 , 69 ].

Type of intervention: In a methodological study of reporting quality of weight loss intervention studies, Thabane et al. found that trials of pharmacologic interventions were reported better than trials of non-pharmacologic interventions [ 70 ].

Interactions between variables: Complex interactions between the previously listed variables are possible. High income countries with more resources may be more likely to conduct larger studies and incorporate a variety of experts. Authors in certain countries may prefer certain journals, and journal endorsement of guidelines and editorial policies may change over time.

Q: Should I focus only on high impact journals?

A: Investigators may choose to investigate only high impact journals because they are more likely to influence practice and policy, or because they assume that methodological standards would be higher. However, the JIF may severely limit the scope of articles included and may skew the sample towards articles with positive findings. The generalizability and applicability of findings from a handful of journals must be examined carefully, especially since the JIF varies over time. Even among journals that are all “high impact”, variations exist in methodological standards.

Q: Can I conduct a methodological study of qualitative research?

A: Yes. Even though a lot of methodological research has been conducted in the quantitative research field, methodological studies of qualitative studies are feasible. Certain databases that catalogue qualitative research including the Cumulative Index to Nursing & Allied Health Literature (CINAHL) have defined subject headings that are specific to methodological research (e.g. “research methodology”). Alternatively, one could also conduct a qualitative methodological review; that is, use qualitative approaches to synthesize methodological issues in qualitative studies.

Q: What reporting guidelines should I use for my methodological study?

A: There is no guideline that covers the entire scope of methodological studies. One adaptation of the PRISMA guidelines has been published, which works well for studies that aim to use the entire target population of research reports [ 71 ]. However, it is not widely used (40 citations in 2 years as of 09 December 2019), and methodological studies that are designed as cross-sectional or before-after studies require a more fit-for-purpose guideline. A more encompassing reporting guideline for a broad range of methodological studies is currently under development [ 72 ]. However, in the absence of formal guidance, the requirements for scientific reporting should be respected, and authors of methodological studies should focus on transparency and reproducibility.

Q: What are the potential threats to validity and how can I avoid them?

A: Methodological studies may be compromised by a lack of internal or external validity. The main threats to internal validity in methodological studies are selection and confounding bias. Investigators must ensure that the methods used to select articles do not make them differ systematically from the set of articles to which they would like to make inferences. For example, attempting to make extrapolations to all journals after analyzing high-impact journals would be misleading.

Many factors (confounders) may distort the association between the exposure and outcome if the included research reports differ with respect to these factors [ 73 ]. For example, when examining the association between source of funding and completeness of reporting, it may be necessary to account for journals that endorse the guidelines. Confounding bias can be addressed by restriction, matching and statistical adjustment [ 73 ]. Restriction appears to be the method of choice for many investigators who choose to include only high impact journals or articles in a specific field. For example, Knol et al. examined the reporting of p -values in baseline tables of high impact journals [ 26 ]. Matching is also sometimes used. In the methodological study of non-randomized interventional studies of elective ventral hernia repair, Parker et al. matched prospective studies with retrospective studies and compared reporting standards [ 74 ]. Some other methodological studies use statistical adjustments. For example, Zhang et al. used regression techniques to determine the factors associated with missing participant data in trials [ 16 ].
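As an illustration of adjustment at the analysis stage, the sketch below fits a logistic regression of completeness of reporting on funding source while adjusting for journal endorsement of the relevant guideline, echoing the example above. The data are synthetic and the variable names are assumptions, not the models used in the cited studies.

```python
# A minimal sketch of statistical adjustment for a potential confounder
# (journal endorsement of a guideline) in a logistic regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 200
reports = pd.DataFrame({
    "industry_funded": rng.integers(0, 2, size=n),
    "journal_endorses_guideline": rng.integers(0, 2, size=n),
})
# Synthetic outcome loosely related to both covariates.
logit_p = -0.5 + 0.4 * reports["industry_funded"] + 0.8 * reports["journal_endorses_guideline"]
reports["complete_reporting"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

adjusted = smf.logit(
    "complete_reporting ~ industry_funded + journal_endorses_guideline",
    data=reports,
).fit()
print(adjusted.summary())
# The industry_funded coefficient is now adjusted for guideline endorsement.
```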

With regard to external validity, researchers interested in conducting methodological studies must consider how generalizable or applicable their findings are. This should tie in closely with the research question and should be explicit. For example, findings from methodological studies on trials published in high impact cardiology journals cannot be assumed to be applicable to trials in other fields. However, investigators must ensure that their sample truly represents the target sample either by a) conducting a comprehensive and exhaustive search, or b) using an appropriate and justified, randomly selected sample of research reports.

Even applicability to high impact journals may vary based on the investigators’ definition, and over time. For example, for high impact journals in the field of general medicine, Bouwmeester et al. included the Annals of Internal Medicine (AIM), BMJ, the Journal of the American Medical Association (JAMA), Lancet, the New England Journal of Medicine (NEJM), and PLoS Medicine ( n  = 6) [ 75 ]. In contrast, the high impact journals selected in the methodological study by Schiller et al. were BMJ, JAMA, Lancet, and NEJM ( n  = 4) [ 76 ]. Another methodological study by Kosa et al. included AIM, BMJ, JAMA, Lancet and NEJM ( n  = 5). In the methodological study by Thabut et al., journals with a JIF greater than 5 were considered to be high impact. Riado Minguez et al. used first quartile journals in the Journal Citation Reports (JCR) for a specific year to determine “high impact” [ 77 ]. Ultimately, the definition of high impact will be based on the number of journals the investigators are willing to include, the year of impact and the JIF cut-off [ 78 ]. We acknowledge that the term “generalizability” may apply differently for methodological studies, especially when in many instances it is possible to include the entire target population in the sample studied.

Finally, methodological studies are not exempt from information bias which may stem from discrepancies in the included research reports [ 79 ], errors in data extraction, or inappropriate interpretation of the information extracted. Likewise, publication bias may also be a concern in methodological studies, but such concepts have not yet been explored.

A proposed framework

In order to inform discussions about methodological studies and the development of guidance for what should be reported, we have outlined some key features of methodological studies that can be used to classify them. For each of the categories outlined below, we provide an example. In our experience, the choice of approach to completing a methodological study can be informed by asking the following four questions:

What is the aim?

Methodological studies that investigate bias

A methodological study may be focused on exploring sources of bias in primary or secondary studies (meta-bias), or on how bias is analyzed. We have taken care to distinguish bias (i.e. systematic deviations from the truth irrespective of the source) from reporting quality or completeness (i.e. not adhering to a specific reporting guideline or norm). An example of where this distinction would be important is in the case of a randomized trial with no blinding. This study (depending on the nature of the intervention) would be at risk of performance bias. However, if the authors report that their study was not blinded, they would have reported adequately. In fact, some methodological studies attempt to capture both “quality of conduct” and “quality of reporting”, such as Richie et al., who reported on the risk of bias in randomized trials of pharmacy practice interventions [ 80 ]. Babic et al. investigated how risk of bias was used to inform sensitivity analyses in Cochrane reviews [ 81 ]. Further, biases related to choice of outcomes can also be explored. For example, Tan et al. investigated differences in treatment effect size based on the outcome reported [ 82 ].

Methodological studies that investigate quality (or completeness) of reporting

Methodological studies may report quality of reporting against a reporting checklist (i.e. adherence to guidelines) or against expected norms. For example, Croituro et al. report on the quality of reporting in systematic reviews published in dermatology journals based on their adherence to the PRISMA statement [ 83 ], and Khan et al. described the quality of reporting of harms in randomized controlled trials published in high impact cardiovascular journals based on the CONSORT extension for harms [ 84 ]. Other methodological studies investigate reporting of certain features of interest that may not be part of formally published checklists or guidelines. For example, Mbuagbaw et al. described how often the implications for research are elaborated using the Evidence, Participants, Intervention, Comparison, Outcome, Timeframe (EPICOT) format [ 30 ].

Methodological studies that investigate the consistency of reporting

Sometimes investigators may be interested in how consistent reports of the same research are, as consistency is expected between conference abstracts and published manuscripts, between manuscript abstracts and the manuscript main text, and between trial registrations and published manuscripts. For example, Rosmarakis et al. investigated consistency between conference abstracts and full text manuscripts [85].

Methodological studies that investigate factors associated with reporting

In addition to identifying issues with reporting in primary and secondary studies, authors of methodological studies may be interested in determining the factors that are associated with certain reporting practices. Many methodological studies incorporate this, albeit as a secondary outcome. For example, Farrokhyar et al. investigated the factors associated with reporting quality in randomized trials of coronary artery bypass grafting surgery [53].

Methodological studies that investigate methods

Methodological studies may also be used to describe or compare methods, and the factors associated with the choice of methods. For example, Mueller et al. described the methods used for systematic reviews and meta-analyses of observational studies [86].

Methodological studies that summarize other methodological studies

Some methodological studies synthesize results from other methodological studies. For example, Li et al. conducted a scoping review of methodological reviews that investigated consistency between full text and abstracts in primary biomedical research [87].

Methodological studies that investigate nomenclature and terminology

Some methodological studies may investigate the use of names and terms in health research. For example, Krnic Martinic et al. investigated the definitions of systematic reviews used in overviews of systematic reviews (OSRs), meta-epidemiological studies and epidemiology textbooks [88].

Other types of methodological studies

In addition to the experimental methodological studies mentioned earlier, there may be other types of methodological studies that are not captured here.

What is the design?

Methodological studies that are descriptive

Most methodological studies are purely descriptive and report their findings as counts (percent) and means (standard deviation) or medians (interquartile range). For example, Mbuagbaw et al. described the reporting of research recommendations in Cochrane HIV systematic reviews [30]. Gohari et al. described the quality of reporting of randomized trials in diabetes in Iran [12].
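
For readers less familiar with these summaries, the following minimal sketch (with invented data and hypothetical variable names) shows the descriptive statistics such studies typically report: counts with percentages for categorical items and mean (SD) or median (IQR) for continuous ones.

```python
# Hypothetical sketch: the descriptive summaries typically reported in
# methodological studies. The data frame and variable names are invented.
import pandas as pd

reports = pd.DataFrame({
    "reported_sample_size_calculation": [True, False, True, True, False, True],
    "n_participants": [120, 45, 300, 80, 150, 60],
})

n = len(reports)
k = reports["reported_sample_size_calculation"].sum()
print(f"Reported a sample size calculation: {k}/{n} ({k / n:.0%})")

mean = reports["n_participants"].mean()
sd = reports["n_participants"].std()
q1, median, q3 = reports["n_participants"].quantile([0.25, 0.5, 0.75])
print(f"Participants per study: mean {mean:.0f} (SD {sd:.0f}); "
      f"median {median:.0f} (IQR {q1:.0f} to {q3:.0f})")
```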

Methodological studies that are analytical

Some methodological studies are analytical, wherein "analytical studies identify and quantify associations, test hypotheses, identify causes and determine whether an association exists between variables, such as between an exposure and a disease." [89] In the case of methodological studies, all of these investigations are possible. For example, Kosa et al. investigated the association between agreement in primary outcome from trial registry to published manuscript and study covariates. They found that larger and more recent studies were more likely to have agreement [15]. Tricco et al. compared the conclusion statements from Cochrane and non-Cochrane systematic reviews with a meta-analysis of the primary outcome and found that non-Cochrane reviews were more likely to report positive findings. These results are a test of the null hypothesis that the proportions of Cochrane and non-Cochrane reviews that report positive results are equal [90].
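
As a hedged illustration of this kind of analytical comparison (not Tricco et al.'s actual data or analysis), the sketch below applies a two-proportion z-test to invented counts to test whether the proportion of reviews with positive conclusions differs between two groups.

```python
# Hypothetical sketch: a two-proportion z-test of the null hypothesis that
# two groups of reviews report positive conclusions equally often.
# The counts below are invented and are not Tricco et al.'s data.
from statsmodels.stats.proportion import proportions_ztest

positive_conclusions = [55, 30]   # non-Cochrane, Cochrane (invented counts)
reviews_examined = [100, 100]

z_stat, p_value = proportions_ztest(count=positive_conclusions, nobs=reviews_examined)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
```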

What is the sampling strategy?

Methodological studies that include the target population

Methodological reviews with narrow research questions may be able to include the entire target population. For example, in the methodological study of Cochrane HIV systematic reviews, Mbuagbaw et al. included all of the available studies (n = 103) [30].

Methodological studies that include a sample of the target population

Many methodological studies use random samples of the target population [33, 91, 92]. Alternatively, purposeful sampling may be used to limit the sample to a subset of research-related reports published within a certain time period, in journals with a certain ranking, or on a specific topic. Systematic sampling can also be used when random sampling may be challenging to implement.
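
The sketch below illustrates, with a hypothetical sampling frame of invented report identifiers, how a simple random sample and a systematic sample of reports might be drawn; it is an illustration of the sampling strategies named above, not a prescribed procedure.

```python
# Hypothetical sketch: drawing a simple random sample and a systematic
# sample from a sampling frame of report identifiers (invented PMIDs).
import random

frame = [f"PMID{i:05d}" for i in range(1, 1001)]  # 1000 eligible reports
sample_size = 50

random.seed(7)                                     # for reproducibility
random_sample = random.sample(frame, sample_size)  # simple random sample

k = len(frame) // sample_size                      # sampling interval
start = random.randrange(k)                        # random start
systematic_sample = frame[start::k][:sample_size]  # every k-th report

print(random_sample[:3])
print(systematic_sample[:3])
```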

What is the unit of analysis?

Methodological studies with a research report as the unit of analysis

Many methodological studies use a research report (e.g., the full manuscript or the abstract of a study) as the unit of analysis, and inferences can be made at the study level. However, both published and unpublished research-related reports can be studied. These may include articles, conference abstracts, registry entries, etc.

Methodological studies with a design, analysis or reporting item as the unit of analysis

Some methodological studies report on items which may occur more than once per article. For example, Paquette et al. report on subgroup analyses in Cochrane reviews of atrial fibrillation in which 17 systematic reviews planned 56 subgroup analyses [93].
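
A minimal sketch of this distinction, using invented data rather than Paquette et al.'s, is shown below: the same extraction can be summarized at the item level (one row per planned subgroup analysis) or aggregated back to the report level (one count per review).

```python
# Hypothetical sketch: the same extraction summarized at two units of
# analysis. Data are invented and are not Paquette et al.'s.
import pandas as pd

# One row per planned subgroup analysis (item-level unit of analysis)
subgroup_analyses = pd.DataFrame({
    "review_id": ["R1", "R1", "R1", "R2", "R3", "R3"],
    "subgroup_variable": ["age", "sex", "dose", "age", "sex", "duration"],
})
print("Planned subgroup analyses (items):", len(subgroup_analyses))

# Aggregated back to the research report (review-level unit of analysis)
print(subgroup_analyses.groupby("review_id").size())
```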

This framework is outlined in Fig. 2.

Figure 2. A proposed framework for methodological studies

Conclusions

Methodological studies have examined different aspects of reporting such as quality, completeness, consistency and adherence to reporting guidelines. As such, many of the methodological study examples cited in this tutorial are related to reporting. However, as an evolving field, the scope of research questions that can be addressed by methodological studies is expected to increase.

In this paper we have outlined the scope and purpose of methodological studies, along with examples of instances in which various approaches have been used. In the absence of formal guidance on the design, conduct, analysis and reporting of methodological studies, we have provided some advice to help make methodological studies consistent. This advice is grounded in good contemporary scientific practice. Generally, the research question should tie in with the sampling approach and planned analysis. We have also highlighted the variables that may inform findings from methodological studies. Lastly, we have provided suggestions for ways in which authors can categorize their methodological studies to inform their design and analysis.

Availability of data and materials

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

Abbreviations

CONSORT: Consolidated Standards of Reporting Trials

EPICOT: Evidence, Participants, Intervention, Comparison, Outcome, Timeframe

GRADE: Grading of Recommendations, Assessment, Development and Evaluations

PICOT: Participants, Intervention, Comparison, Outcome, Timeframe

PRISMA: Preferred Reporting Items of Systematic reviews and Meta-Analyses

SWAR: Studies Within a Review

SWAT: Studies Within a Trial

References

Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet. 2009;374(9683):86–9.

Chan AW, Song F, Vickers A, Jefferson T, Dickersin K, Gotzsche PC, Krumholz HM, Ghersi D, van der Worp HB. Increasing value and reducing waste: addressing inaccessible research. Lancet. 2014;383(9913):257–66.

Ioannidis JP, Greenland S, Hlatky MA, Khoury MJ, Macleod MR, Moher D, Schulz KF, Tibshirani R. Increasing value and reducing waste in research design, conduct, and analysis. Lancet. 2014;383(9912):166–75.

Higgins JP, Altman DG, Gotzsche PC, Juni P, Moher D, Oxman AD, Savovic J, Schulz KF, Weeks L, Sterne JA. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928.

Moher D, Schulz KF, Altman DG. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet. 2001;357.

Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JP, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Med. 2009;6(7):e1000100.

Shea BJ, Hamel C, Wells GA, Bouter LM, Kristjansson E, Grimshaw J, Henry DA, Boers M. AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. J Clin Epidemiol. 2009;62(10):1013–20.

Shea BJ, Reeves BC, Wells G, Thuku M, Hamel C, Moran J, Moher D, Tugwell P, Welch V, Kristjansson E, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. Bmj. 2017;358:j4008.

Lawson DO, Leenus A, Mbuagbaw L. Mapping the nomenclature, methodology, and reporting of studies that review methods: a pilot methodological review. Pilot Feasibility Studies. 2020;6(1):13.

Puljak L, Makaric ZL, Buljan I, Pieper D. What is a meta-epidemiological study? Analysis of published literature indicated heterogeneous study designs and definitions. J Comp Eff Res. 2020.

Abbade LPF, Wang M, Sriganesh K, Jin Y, Mbuagbaw L, Thabane L. The framing of research questions using the PICOT format in randomized controlled trials of venous ulcer disease is suboptimal: a systematic survey. Wound Repair Regen. 2017;25(5):892–900.

Gohari F, Baradaran HR, Tabatabaee M, Anijidani S, Mohammadpour Touserkani F, Atlasi R, Razmgir M. Quality of reporting randomized controlled trials (RCTs) in diabetes in Iran; a systematic review. J Diabetes Metab Disord. 2015;15(1):36.

Wang M, Jin Y, Hu ZJ, Thabane A, Dennis B, Gajic-Veljanoski O, Paul J, Thabane L. The reporting quality of abstracts of stepped wedge randomized trials is suboptimal: a systematic survey of the literature. Contemp Clin Trials Commun. 2017;8:1–10.

Shanthanna H, Kaushal A, Mbuagbaw L, Couban R, Busse J, Thabane L. A cross-sectional study of the reporting quality of pilot or feasibility trials in high-impact anesthesia journals. Can J Anaesth. 2018;65(11):1180–1195.

Kosa SD, Mbuagbaw L, Borg Debono V, Bhandari M, Dennis BB, Ene G, Leenus A, Shi D, Thabane M, Valvasori S, et al. Agreement in reporting between trial publications and current clinical trial registry in high impact journals: a methodological review. Contemporary Clinical Trials. 2018;65:144–50.

Zhang Y, Florez ID, Colunga Lozano LE, Aloweni FAB, Kennedy SA, Li A, Craigie S, Zhang S, Agarwal A, Lopes LC, et al. A systematic survey on reporting and methods for handling missing participant data for continuous outcomes in randomized controlled trials. J Clin Epidemiol. 2017;88:57–66.

Hernández AV, Boersma E, Murray GD, Habbema JD, Steyerberg EW. Subgroup analyses in therapeutic cardiovascular clinical trials: are most of them misleading? Am Heart J. 2006;151(2):257–64.

Samaan Z, Mbuagbaw L, Kosa D, Borg Debono V, Dillenburg R, Zhang S, Fruci V, Dennis B, Bawor M, Thabane L. A systematic scoping review of adherence to reporting guidelines in health care literature. J Multidiscip Healthc. 2013;6:169–88.

Buscemi N, Hartling L, Vandermeer B, Tjosvold L, Klassen TP. Single data extraction generated more errors than double data extraction in systematic reviews. J Clin Epidemiol. 2006;59(7):697–703.

Carrasco-Labra A, Brignardello-Petersen R, Santesso N, Neumann I, Mustafa RA, Mbuagbaw L, Etxeandia Ikobaltzeta I, De Stio C, McCullagh LJ, Alonso-Coello P. Improving GRADE evidence tables part 1: a randomized trial shows improved understanding of content in summary-of-findings tables with a new format. J Clin Epidemiol. 2016;74:7–18.

The Northern Ireland Hub for Trials Methodology Research: SWAT/SWAR Information [ https://www.qub.ac.uk/sites/TheNorthernIrelandNetworkforTrialsMethodologyResearch/SWATSWARInformation/ ]. Accessed 31 Aug 2020.

Chick S, Sánchez P, Ferrin D, Morrice D. How to conduct a successful simulation study. In: Proceedings of the 2003 winter simulation conference: 2003; 2003. p. 66–70.

Mulrow CD. The medical review article: state of the science. Ann Intern Med. 1987;106(3):485–8.

Sacks HS, Reitman D, Pagano D, Kupelnick B. Meta-analysis: an update. Mount Sinai J Med New York. 1996;63(3–4):216–24.

Areia M, Soares M, Dinis-Ribeiro M. Quality reporting of endoscopic diagnostic studies in gastrointestinal journals: where do we stand on the use of the STARD and CONSORT statements? Endoscopy. 2010;42(2):138–47.

Knol M, Groenwold R, Grobbee D. P-values in baseline tables of randomised controlled trials are inappropriate but still common in high impact journals. Eur J Prev Cardiol. 2012;19(2):231–2.

Chen M, Cui J, Zhang AL, Sze DM, Xue CC, May BH. Adherence to CONSORT items in randomized controlled trials of integrative medicine for colorectal Cancer published in Chinese journals. J Altern Complement Med. 2018;24(2):115–24.

Hopewell S, Ravaud P, Baron G, Boutron I. Effect of editors' implementation of CONSORT guidelines on the reporting of abstracts in high impact medical journals: interrupted time series analysis. BMJ. 2012;344:e4178.

The Cochrane Methodology Register Issue 2 2009 [ https://cmr.cochrane.org/help.htm ]. Accessed 31 Aug 2020.

Mbuagbaw L, Kredo T, Welch V, Mursleen S, Ross S, Zani B, Motaze NV, Quinlan L. Critical EPICOT items were absent in Cochrane human immunodeficiency virus systematic reviews: a bibliometric analysis. J Clin Epidemiol. 2016;74:66–72.

Barton S, Peckitt C, Sclafani F, Cunningham D, Chau I. The influence of industry sponsorship on the reporting of subgroup analyses within phase III randomised controlled trials in gastrointestinal oncology. Eur J Cancer. 2015;51(18):2732–9.

Setia MS. Methodology series module 5: sampling strategies. Indian J Dermatol. 2016;61(5):505–9.

Wilson B, Burnett P, Moher D, Altman DG, Al-Shahi Salman R. Completeness of reporting of randomised controlled trials including people with transient ischaemic attack or stroke: a systematic review. Eur Stroke J. 2018;3(4):337–46.

Kahale LA, Diab B, Brignardello-Petersen R, Agarwal A, Mustafa RA, Kwong J, Neumann I, Li L, Lopes LC, Briel M, et al. Systematic reviews do not adequately report or address missing outcome data in their analyses: a methodological survey. J Clin Epidemiol. 2018;99:14–23.

De Angelis CD, Drazen JM, Frizelle FA, Haug C, Hoey J, Horton R, Kotzin S, Laine C, Marusic A, Overbeke AJPM, et al. Is this clinical trial fully registered?: a statement from the International Committee of Medical Journal Editors*. Ann Intern Med. 2005;143(2):146–8.

Ohtake PJ, Childs JD. Why publish study protocols? Phys Ther. 2014;94(9):1208–9.

Rombey T, Allers K, Mathes T, Hoffmann F, Pieper D. A descriptive analysis of the characteristics and the peer review process of systematic review protocols published in an open peer review journal from 2012 to 2017. BMC Med Res Methodol. 2019;19(1):57.

Grimes DA, Schulz KF. Bias and causal associations in observational research. Lancet. 2002;359(9302):248–52.

Porta M, editor. A dictionary of epidemiology. 5th edn. Oxford: Oxford University Press, Inc.; 2008.

El Dib R, Tikkinen KAO, Akl EA, Gomaa HA, Mustafa RA, Agarwal A, Carpenter CR, Zhang Y, Jorge EC, Almeida R, et al. Systematic survey of randomized trials evaluating the impact of alternative diagnostic strategies on patient-important outcomes. J Clin Epidemiol. 2017;84:61–9.

Helzer JE, Robins LN, Taibleson M, Woodruff RA Jr, Reich T, Wish ED. Reliability of psychiatric diagnosis. I. a methodological review. Arch Gen Psychiatry. 1977;34(2):129–33.

Chung ST, Chacko SK, Sunehag AL, Haymond MW. Measurements of gluconeogenesis and Glycogenolysis: a methodological review. Diabetes. 2015;64(12):3996–4010.

Sterne JA, Juni P, Schulz KF, Altman DG, Bartlett C, Egger M. Statistical methods for assessing the influence of study characteristics on treatment effects in 'meta-epidemiological' research. Stat Med. 2002;21(11):1513–24.

Moen EL, Fricano-Kugler CJ, Luikart BW, O’Malley AJ. Analyzing clustered data: why and how to account for multiple observations nested within a study participant? PLoS One. 2016;11(1):e0146721.

Zyzanski SJ, Flocke SA, Dickinson LM. On the nature and analysis of clustered data. Ann Fam Med. 2004;2(3):199–200.

Mathes T, Klassen P, Pieper D. Frequency of data extraction errors and methods to increase data extraction quality: a methodological review. BMC Med Res Methodol. 2017;17(1):152.

Bui DDA, Del Fiol G, Hurdle JF, Jonnalagadda S. Extractive text summarization system to aid data extraction from full text in systematic review development. J Biomed Inform. 2016;64:265–72.

Bui DD, Del Fiol G, Jonnalagadda S. PDF text classification to leverage information extraction from publication reports. J Biomed Inform. 2016;61:141–8.

Maticic K, Krnic Martinic M, Puljak L. Assessment of reporting quality of abstracts of systematic reviews with meta-analysis using PRISMA-A and discordance in assessments between raters without prior experience. BMC Med Res Methodol. 2019;19(1):32.

Speich B. Blinding in surgical randomized clinical trials in 2015. Ann Surg. 2017;266(1):21–2.

Abraha I, Cozzolino F, Orso M, Marchesi M, Germani A, Lombardo G, Eusebi P, De Florio R, Luchetta ML, Iorio A, et al. A systematic review found that deviations from intention-to-treat are common in randomized trials and systematic reviews. J Clin Epidemiol. 2017;84:37–46.

Zhong Y, Zhou W, Jiang H, Fan T, Diao X, Yang H, Min J, Wang G, Fu J, Mao B. Quality of reporting of two-group parallel randomized controlled clinical trials of multi-herb formulae: A survey of reports indexed in the Science Citation Index Expanded. Eur J Integrative Med. 2011;3(4):e309–16.

Farrokhyar F, Chu R, Whitlock R, Thabane L. A systematic review of the quality of publications reporting coronary artery bypass grafting trials. Can J Surg. 2007;50(4):266–77.

Oltean H, Gagnier JJ. Use of clustering analysis in randomized controlled trials in orthopaedic surgery. BMC Med Res Methodol. 2015;15:17.

Fleming PS, Koletsi D, Pandis N. Blinded by PRISMA: are systematic reviewers focusing on PRISMA and ignoring other guidelines? PLoS One. 2014;9(5):e96407.

Balasubramanian SP, Wiener M, Alshameeri Z, Tiruvoipati R, Elbourne D, Reed MW. Standards of reporting of randomized controlled trials in general surgery: can we do better? Ann Surg. 2006;244(5):663–7.

de Vries TW, van Roon EN. Low quality of reporting adverse drug reactions in paediatric randomised controlled trials. Arch Dis Child. 2010;95(12):1023–6.

Borg Debono V, Zhang S, Ye C, Paul J, Arya A, Hurlburt L, Murthy Y, Thabane L. The quality of reporting of RCTs used within a postoperative pain management meta-analysis, using the CONSORT statement. BMC Anesthesiol. 2012;12:13.

Kaiser KA, Cofield SS, Fontaine KR, Glasser SP, Thabane L, Chu R, Ambrale S, Dwary AD, Kumar A, Nayyar G, et al. Is funding source related to study reporting quality in obesity or nutrition randomized control trials in top-tier medical journals? Int J Obes. 2012;36(7):977–81.

Thomas O, Thabane L, Douketis J, Chu R, Westfall AO, Allison DB. Industry funding and the reporting quality of large long-term weight loss trials. Int J Obes. 2008;32(10):1531–6.

Khan NR, Saad H, Oravec CS, Rossi N, Nguyen V, Venable GT, Lillard JC, Patel P, Taylor DR, Vaughn BN, et al. A review of industry funding in randomized controlled trials published in the neurosurgical literature-the elephant in the room. Neurosurgery. 2018;83(5):890–7.

Hansen C, Lundh A, Rasmussen K, Hrobjartsson A. Financial conflicts of interest in systematic reviews: associations with results, conclusions, and methodological quality. Cochrane Database Syst Rev. 2019;8:Mr000047.

Kiehna EN, Starke RM, Pouratian N, Dumont AS. Standards for reporting randomized controlled trials in neurosurgery. J Neurosurg. 2011;114(2):280–5.

Liu LQ, Morris PJ, Pengel LH. Compliance to the CONSORT statement of randomized controlled trials in solid organ transplantation: a 3-year overview. Transpl Int. 2013;26(3):300–6.

Bala MM, Akl EA, Sun X, Bassler D, Mertz D, Mejza F, Vandvik PO, Malaga G, Johnston BC, Dahm P, et al. Randomized trials published in higher vs. lower impact journals differ in design, conduct, and analysis. J Clin Epidemiol. 2013;66(3):286–95.

Lee SY, Teoh PJ, Camm CF, Agha RA. Compliance of randomized controlled trials in trauma surgery with the CONSORT statement. J Trauma Acute Care Surg. 2013;75(4):562–72.

Ziogas DC, Zintzaras E. Analysis of the quality of reporting of randomized controlled trials in acute and chronic myeloid leukemia, and myelodysplastic syndromes as governed by the CONSORT statement. Ann Epidemiol. 2009;19(7):494–500.

Alvarez F, Meyer N, Gourraud PA, Paul C. CONSORT adoption and quality of reporting of randomized controlled trials: a systematic analysis in two dermatology journals. Br J Dermatol. 2009;161(5):1159–65.

Mbuagbaw L, Thabane M, Vanniyasingam T, Borg Debono V, Kosa S, Zhang S, Ye C, Parpia S, Dennis BB, Thabane L. Improvement in the quality of abstracts in major clinical journals since CONSORT extension for abstracts: a systematic review. Contemporary Clin trials. 2014;38(2):245–50.

Thabane L, Chu R, Cuddy K, Douketis J. What is the quality of reporting in weight loss intervention studies? A systematic review of randomized controlled trials. Int J Obes. 2007;31(10):1554–9.

Murad MH, Wang Z. Guidelines for reporting meta-epidemiological methodology research. Evidence Based Med. 2017;22(4):139.

METRIC - MEthodological sTudy ReportIng Checklist: guidelines for reporting methodological studies in health research [ http://www.equator-network.org/library/reporting-guidelines-under-development/reporting-guidelines-under-development-for-other-study-designs/#METRIC ]. Accessed 31 Aug 2020.

Jager KJ, Zoccali C, MacLeod A, Dekker FW. Confounding: what it is and how to deal with it. Kidney Int. 2008;73(3):256–60.

Parker SG, Halligan S, Erotocritou M, Wood CPJ, Boulton RW, Plumb AAO, Windsor ACJ, Mallett S. A systematic methodological review of non-randomised interventional studies of elective ventral hernia repair: clear definitions and a standardised minimum dataset are needed. Hernia. 2019.

Bouwmeester W, Zuithoff NPA, Mallett S, Geerlings MI, Vergouwe Y, Steyerberg EW, Altman DG, Moons KGM. Reporting and methods in clinical prediction research: a systematic review. PLoS Med. 2012;9(5):1–12.

Schiller P, Burchardi N, Niestroj M, Kieser M. Quality of reporting of clinical non-inferiority and equivalence randomised trials--update and extension. Trials. 2012;13:214.

Riado Minguez D, Kowalski M, Vallve Odena M, Longin Pontzen D, Jelicic Kadic A, Jeric M, Dosenovic S, Jakus D, Vrdoljak M, Poklepovic Pericic T, et al. Methodological and reporting quality of systematic reviews published in the highest ranking journals in the field of pain. Anesth Analg. 2017;125(4):1348–54.

Thabut G, Estellat C, Boutron I, Samama CM, Ravaud P. Methodological issues in trials assessing primary prophylaxis of venous thrombo-embolism. Eur Heart J. 2005;27(2):227–36.

Puljak L, Riva N, Parmelli E, González-Lorenzo M, Moja L, Pieper D. Data extraction methods: an analysis of internal reporting discrepancies in single manuscripts and practical advice. J Clin Epidemiol. 2020;117:158–64.

Ritchie A, Seubert L, Clifford R, Perry D, Bond C. Do randomised controlled trials relevant to pharmacy meet best practice standards for quality conduct and reporting? A systematic review. Int J Pharm Pract. 2019.

Babic A, Vuka I, Saric F, Proloscic I, Slapnicar E, Cavar J, Pericic TP, Pieper D, Puljak L. Overall bias methods and their use in sensitivity analysis of Cochrane reviews were not consistent. J Clin Epidemiol. 2019.

Tan A, Porcher R, Crequit P, Ravaud P, Dechartres A. Differences in treatment effect size between overall survival and progression-free survival in immunotherapy trials: a Meta-epidemiologic study of trials with results posted at ClinicalTrials.gov. J Clin Oncol. 2017;35(15):1686–94.

Croitoru D, Huang Y, Kurdina A, Chan AW, Drucker AM. Quality of reporting in systematic reviews published in dermatology journals. Br J Dermatol. 2020;182(6):1469–76.

Khan MS, Ochani RK, Shaikh A, Vaduganathan M, Khan SU, Fatima K, Yamani N, Mandrola J, Doukky R, Krasuski RA. Assessing the quality of reporting of harms in randomized controlled trials published in high impact cardiovascular journals. Eur Heart J Qual Care Clin Outcomes. 2019.

Rosmarakis ES, Soteriades ES, Vergidis PI, Kasiakou SK, Falagas ME. From conference abstract to full paper: differences between data presented in conferences and journals. FASEB J. 2005;19(7):673–80.

Mueller M, D’Addario M, Egger M, Cevallos M, Dekkers O, Mugglin C, Scott P. Methods to systematically review and meta-analyse observational studies: a systematic scoping review of recommendations. BMC Med Res Methodol. 2018;18(1):44.

Li G, Abbade LPF, Nwosu I, Jin Y, Leenus A, Maaz M, Wang M, Bhatt M, Zielinski L, Sanger N, et al. A scoping review of comparisons between abstracts and full reports in primary biomedical research. BMC Med Res Methodol. 2017;17(1):181.

Krnic Martinic M, Pieper D, Glatt A, Puljak L. Definition of a systematic review used in overviews of systematic reviews, meta-epidemiological studies and textbooks. BMC Med Res Methodol. 2019;19(1):203.

Analytical study [ https://medical-dictionary.thefreedictionary.com/analytical+study ]. Accessed 31 Aug 2020.

Tricco AC, Tetzlaff J, Pham B, Brehaut J, Moher D. Non-Cochrane vs. Cochrane reviews were twice as likely to have positive conclusion statements: cross-sectional study. J Clin Epidemiol. 2009;62(4):380–6 e381.

Schalken N, Rietbergen C. The reporting quality of systematic reviews and Meta-analyses in industrial and organizational psychology: a systematic review. Front Psychol. 2017;8:1395.

Ranker LR, Petersen JM, Fox MP. Awareness of and potential for dependent error in the observational epidemiologic literature: A review. Ann Epidemiol. 2019;36:15–9 e12.

Paquette M, Alotaibi AM, Nieuwlaat R, Santesso N, Mbuagbaw L. A meta-epidemiological study of subgroup analyses in cochrane systematic reviews of atrial fibrillation. Syst Rev. 2019;8(1):241.

Download references

Acknowledgements

This work did not receive any dedicated funding.

Author information

Authors and affiliations

Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON, Canada

Lawrence Mbuagbaw, Daeria O. Lawson & Lehana Thabane

Biostatistics Unit/FSORC, 50 Charlton Avenue East, St Joseph’s Healthcare—Hamilton, 3rd Floor Martha Wing, Room H321, Hamilton, Ontario, L8N 4A6, Canada

Lawrence Mbuagbaw & Lehana Thabane

Centre for the Development of Best Practices in Health, Yaoundé, Cameroon

Lawrence Mbuagbaw

Center for Evidence-Based Medicine and Health Care, Catholic University of Croatia, Ilica 242, 10000, Zagreb, Croatia

Livia Puljak

Department of Epidemiology and Biostatistics, School of Public Health – Bloomington, Indiana University, Bloomington, IN, 47405, USA

David B. Allison

Departments of Paediatrics and Anaesthesia, McMaster University, Hamilton, ON, Canada

Lehana Thabane

Centre for Evaluation of Medicine, St. Joseph’s Healthcare-Hamilton, Hamilton, ON, Canada

Population Health Research Institute, Hamilton Health Sciences, Hamilton, ON, Canada

You can also search for this author in PubMed   Google Scholar

Contributions

LM conceived the idea and drafted the outline and paper. DOL and LT commented on the idea and draft outline. LM, LP and DOL performed literature searches and data extraction. All authors (LM, DOL, LT, LP, DBA) reviewed several draft versions of the manuscript and approved the final manuscript.

Corresponding author

Correspondence to Lawrence Mbuagbaw.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

DOL, DBA, LM, LP and LT are involved in the development of a reporting guideline for methodological studies.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article

Mbuagbaw, L., Lawson, D.O., Puljak, L. et al. A tutorial on methodological studies: the what, when, how and why. BMC Med Res Methodol 20, 226 (2020). https://doi.org/10.1186/s12874-020-01107-7

Download citation

Received: 27 May 2020

Accepted: 27 August 2020

Published: 07 September 2020

DOI: https://doi.org/10.1186/s12874-020-01107-7


Keywords

  • Methodological study
  • Meta-epidemiology
  • Research methods
  • Research-on-research


National Institute of Mental Health (NIMH)

What are the different types of clinical research?

February 18, 2021

There are many different types of clinical research because researchers study many different things.  

Treatment research usually tests an intervention such as medication, psychotherapy, new devices, or new approaches.

Prevention research looks for better ways to prevent disorders from developing or returning. Different kinds of prevention research may study medicines, vitamins, or lifestyle changes.  

Diagnostic research refers to the practice of looking for better ways to identify a particular disorder or condition.  

Screening research aims to find the best ways to detect certain disorders or health conditions. 

Genetic studies aim to improve our ability to predict disorders by identifying and understanding how genes and illnesses may be related. Research in this area may explore ways in which a person’s genes make him or her more or less likely to develop a disorder. This may lead to development of tailor-made treatments based on a patient’s genetic make-up.  

Epidemiological studies look at how often and why disorders happen in certain groups of people.

Research studies can be outpatient or inpatient. Outpatient means that participants do not stay overnight at the hospital or research center. Inpatient means that participants will need to stay at least one night in the hospital or research center.  

Thank you for your interest in learning more about clinical research!

National Institutes of Health (NIH) - Turning Discovery into Health

Research Methods Resources

Methods at a glance

This section provides information and examples of methodological issues to be aware of when working with different study designs. Virtually all studies face methodological issues regarding the selection of the primary outcome(s), sample size estimation, missing outcomes, and multiple comparisons. Randomized studies face additional challenges related to the method for randomization. Other studies face specific challenges associated with their study design such as those that arise in effectiveness-implementation research; multiphase optimization strategy (MOST) studies; sequential, multiple assignment, randomized trials (SMART); crossover designs; non-inferiority trials; regression discontinuity designs; and paired availability designs. Some studies face issues involving exact tests, adherence to behavioral interventions, noncompliance in encouragement designs, evaluation of risk prediction models, or evaluation of surrogate endpoints.
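
As one small, hedged example of a broadly applicable issue from the list above, the sketch below estimates the sample size for a simple two-arm, individually randomized trial; the effect size, alpha, and power are illustrative assumptions, not recommendations for any particular study.

```python
# Hypothetical sketch: sample size estimation for a simple two-arm,
# individually randomized trial. The effect size, alpha, and power below
# are illustrative assumptions, not recommendations.
from statsmodels.stats.power import TTestIndPower

n_per_arm = TTestIndPower().solve_power(effect_size=0.4, alpha=0.05, power=0.80)
print(f"Approximately {n_per_arm:.0f} participants per arm")
```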

Learn more about broadly applicable methods

Experiments, including clinical trials, differ considerably in the methods used to assign participants to study conditions (or study arms) and to deliver interventions to those participants.

This section provides information related to the design and analysis of experiments in which 

  • participants are assigned in groups (or clusters) and individual observations are analyzed to evaluate the effect of the intervention, 
  • participants are assigned individually but receive at least some of their intervention with other participants or through an intervention agent shared with other participants,
  • participants are assigned in groups (or clusters) but groups cross over to the intervention condition at pre-determined time points in sequential, staggered fashion until all groups receive the intervention, and
  • participants are assigned in groups, which are assigned to receive the intervention based on a cutoff value of an assignment score, and individual observations are used to evaluate the effect of the intervention.

This material is relevant for both human and animal studies, as well as basic and applied research. While it is important for investigators to become familiar with the issues presented on this website, it is even more important that they collaborate with a methodologist who is familiar with these issues.

In a parallel group-randomized trial (GRT), groups or clusters are randomized to study conditions, and observations are taken on the members of those groups with no crossover to a different condition during the trial.
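
The following minimal sketch (with hypothetical clinic names) illustrates the assignment step in a parallel GRT: whole groups are randomized to study conditions, and because members of the same group tend to be correlated, the later analysis of individual observations must account for that clustering.

```python
# Hypothetical sketch: the assignment step of a parallel group-randomized
# trial. Clinic names are invented; outcomes would later be measured on
# individuals within each clinic, and the analysis must account for the
# correlation among members of the same clinic (e.g., mixed models).
import random

clinics = [f"clinic_{i:02d}" for i in range(1, 13)]  # 12 groups to randomize

random.seed(42)
random.shuffle(clinics)
half = len(clinics) // 2
allocation = {clinic: "intervention" for clinic in clinics[:half]}
allocation.update({clinic: "control" for clinic in clinics[half:]})

for clinic, arm in sorted(allocation.items()):
    print(clinic, "->", arm)
```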

Learn more about GRTs

In an individually randomized group-treatment (IRGT) trial, individuals are randomized to study conditions but receive at least some of their intervention with other participants or through an intervention agent shared with other participants.

Learn more about IRGTs

In a stepped wedge group- or cluster-randomized trial (SWGRT), groups or clusters are randomized to sequences that cross over to the intervention condition at predetermined time points in a staggered fashion until all groups receive the intervention.
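
A minimal sketch of a stepped wedge schedule is shown below; the numbers of sequences and periods are illustrative. Each row is a sequence of groups, 0 denotes a control period and 1 an intervention period, and every sequence has crossed over by the final period.

```python
# Hypothetical sketch: a stepped wedge schedule. Each row is a sequence of
# groups; 0 = control period, 1 = intervention period. The numbers of
# sequences and periods are illustrative.
n_sequences = 4
n_periods = n_sequences + 1  # one shared baseline period plus one step per sequence

schedule = [
    [1 if period > sequence else 0 for period in range(n_periods)]
    for sequence in range(n_sequences)
]

for i, row in enumerate(schedule, start=1):
    print(f"sequence {i}: {row}")
# sequence 1: [0, 1, 1, 1, 1]
# ...
# sequence 4: [0, 0, 0, 0, 1]
```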

Learn more about SWGRTs

In a group or cluster regression discontinuity design (GRDD), groups or clusters are assigned to study conditions based on whether a group-level summary of an assignment score crosses a cut-off. Observations are taken on members of the groups.

Learn more about GRDDs

NIH Clinical Trial Requirements

The NIH launched a series of initiatives to enhance the accountability and transparency of clinical research. These initiatives target key points along the entire clinical trial lifecycle, from concept to reporting the results.

Clinical research methods for treatment, diagnosis, prognosis, etiology, screening, and prevention: A narrative review

Affiliations

  • 1 Department of Oncology, McMaster University, Hamilton, Ontario, Canada.
  • 2 Center for Clinical Practice Guideline Conduction and Evaluation, Children's Hospital of Fudan University, Shanghai, P.R. China.
  • 3 Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, Ontario, Canada.
  • 4 Department of Pediatrics, University of Antioquia, Colombia.
  • 5 Editorial Office, Chinese Journal of Evidence-Based Pediatrics, Children's Hospital of Fudan University, Shanghai, P.R. China.
  • 6 Division of Thoracic Surgery, Xuanwu Hospital, Capital Medical University, Beijing, P.R. China.
  • 7 Division of Neuropsychiatry and Behavioral Neurology and Clinical Psychology, Beijing Tiantan Hospital, Capital Medical University, Beijing, P.R. China.
  • 8 Division of Respirology, Tongren Hospital, Capital Medical University, Beijing, P.R. China.
  • 9 Division of Respirology, Xuanwu Hospital, Capital Medical University, Beijing, P.R. China.
  • 10 Division of Orthopedic Surgery, Juravinski Cancer Centre, McMaster University, Hamilton, Ontario, Canada.
  • PMID: 32445266
  • DOI: 10.1111/jebm.12384

This narrative review is an introduction for health professionals on how to conduct and report clinical research in six categories: treatment, diagnosis/differential diagnosis, prognosis, etiology, screening, and prevention. It explains the importance of beginning with an appropriate clinical question and of exploring, through a literature search, how appropriate that question is. Three methodological directives can assist clinicians in conducting their studies: (1) how to conduct an original study or a systematic review, (2) how to report an original study or a systematic review, and (3) how to assess the quality or risk of bias of a previous relevant original study or systematic review. This methodological overview provides readers with key points and resources on how to perform high-quality research in the six main clinical categories.

Keywords: clinical research methods; diagnosis; literature search; prognosis; treatment.

© 2020 Chinese Cochrane Center, West China Hospital of Sichuan University and John Wiley & Sons Australia, Ltd.

MeSH terms

  • Biomedical Research / methods*
  • Biomedical Research / standards
  • Mass Screening
  • Preventive Medicine / methods*
  • Systematic Reviews as Topic
  • Therapeutics / methods*

Foundations of Clinical Research

This Harvard Medical School six-month, application-based certificate program provides the essential skill sets and fundamental knowledge required to begin or expand your clinical research career.

Associated Schools

Harvard Medical School

What you'll learn

Understand and apply the foundational concepts of biostatistics and epidemiology

Develop a research question and formulate a testable hypothesis

Design and begin to implement a clinical research study

Cultivate the skills required to present a clinical research study

Critically evaluate the research findings in medical literature

Synthesize crucial statistical analyses using Stata software

Course description

The Foundations of Clinical Research program is rooted in the belief that clinical research training is critical to professional development in health care. Clinical research training not only creates potential independent investigators, but also enables clinicians to advance their careers through a greater understanding of research evidence. Designed to provide learners with the foundational knowledge and skill sets required to produce high-quality clinical research, our program will lay the fundamental groundwork in epidemiology and biostatistics required for a multifaceted career in clinical research.

The overarching goal of the Foundations of Clinical Research program is to equip the next generation of researchers with the skill sets essential to evaluating evidence, understanding biostatistics, and beginning their clinical research careers. Our aim is to ensure that learners develop a strong foundation in the design, implementation, analysis and interpretation of clinical research studies.

During the program, our innovative active learning approach emphasizes the traditional tutorial system, with weekly live video tutorials, seminars and symposia anchored by three intensive live online weekend workshops. The Foundations of Clinical Research program's six-month online curriculum emphasizes real-time, skill-based learning.

Participants will be eligible for Associate Alumni status upon successful completion of the program. Early tuition and need-based tuition reductions may be available.

Course Outline

Live Workshops

The interactive workshop curriculum will focus on hands-on skill development through active learning. To that end, the intensive schedule is designed to accelerate the growth of high-yield clinical research skills via individual and team-based workshop exercises. Students will be immersed in a dynamic learning environment that encourages collaboration and collegial networking with faculty and peers. 

Essential elements of the workshop include instruction and practical exercises in the core concepts of biostatistics, epidemiology and research question development, as well as critical assessment of the medical literature and practical training in statistical software using real-life datasets. In addition to providing training in mentorship, academic career development and leadership, we create a supportive and active learning environment where opportunities for knowledge retention and networking abound.

Live Symposia, Tutorials and Seminars

Symposia, tutorials and seminars are mandatory and will be delivered live online and organized according to eight specific clinical research topics. 

Eight 3-Hour Symposia

  • Instruction on a specific clinical research topic (e.g., cohort study design and interpretation)
  • In-depth discussion on a related epidemiology concept (e.g., odds ratio)
  • Hands-on guidance for implementing the related analysis with statistical programming in Stata

Eight 1-Hour Tutorials

  • Interpret and report on papers related to the specific clinical research topic

Eight 1-Hour Special-Topic Seminars

  • Application of the biostatistical and epidemiological concepts to specific clinical research topics, with concrete examples

Assignments

All students will be expected to complete all assignments by the due dates. Assignments will be graded as either “pass” or “fail.”

Individual Assignment 1

Individual Research Question and Study Design

  • Generate a novel research question in the evidence-based PICO format
  • Receive expert faculty review

Individual Assignment 2

Design, Implement and Present an Original Abstract

  • Design and implement a clinical research study based on a publicly available dataset
  • Analyze and create data visualizations via a user-friendly R Shiny web app
  • Write a formal 350-word abstract suitable for submission to an international conference
  • Present a digital poster to faculty at Workshop 3

Online Lectures

Research Study Introduction 

  • Designing a Clinical Research Study I–III
  • Introduction to Evidence-Based Medicine, Systematic Review and Meta-Analysis
  • Study Design 1 – Observational
  • Study Design 2 – Randomized Controlled Trials
  • Study Design 3 – Quasi-Experimental Studies
  • Introduction to Biostatistics
  • An Investigator’s Responsibility for Protection of Research Subjects
  • How to Search PubMed
  • Overview of Evidence-Based Medicine

Statistical Programming in Stata

  • Loading Data
  • Basic Programming Commands
  • Data Cleansing
  • Data Analytics I – Central Tendency
  • Data Analytics II – Statistical Testing
  • Data Analytics III – Regression Testing

Instructors

Jamie Robertson

Djøra Soeteman


Why Should the FDA Focus on Pragmatic Clinical Research?

  • 1 US Food and Drug Administration, White Oak Campus, Silver Spring, Maryland

Traditional randomized clinical trials (RCTs) have long been a key tool underpinning drug and device development. The use of individual participant randomization and active or placebo controls in RCTs, combined with comprehensive collection of highly structured data, supports assay sensitivity. At the same time, focused enrollment criteria and careful attention to the collection of adverse events for specified follow-up periods promote detection of toxicities and risks. These trials support a system, regulated by the US Food and Drug Administration (FDA) and other global regulators, that allows the majority of candidate therapies whose risks outweigh benefits for intended use to be screened out while enabling safe and effective medical products to advance to market. However, the next stage—after product development and marketing authorization are completed and a therapy is integrated into clinical practice—needs serious attention.


Abbasi AB , Curtis LH , Califf RM. Why Should the FDA Focus on Pragmatic Clinical Research? JAMA. Published online June 03, 2024. doi:10.1001/jama.2024.6227



Clinical Research Methods

Director: Todd Ogden, PhD

The Mailman School offers the degree of  Master of Science in Biostatistics, with an emphasis on issues in the statistical analysis and design of clinical studies. The Clinical Research Methods track was conceived and designed for clinicians who are pursuing research careers in academic medicine.  Candidacy in the CRM program is open to anyone who holds a medical/doctoral degree and/or has several years of clinical research experience.

Competencies

In addition to achieving the MS in Biostatistics core competencies, graduates of the 30 credit MS Clinical Research Methods Track develop specific competencies in data analysis and computing, public health and collaborative research, and data management. MS/CRM graduates will be able to:

Data Analysis and Computing

  • Apply the basic tenets of research design and analysis for the purpose of critically reviewing research and programs in disciplines outside of biostatistics;
  • Differentiate between quantitative problems that can be addressed with standard methods and those requiring input from a professional biostatistician.

Public Health and Collaborative Research

  • Formulate and prepare a written statistical plan for analysis of public health research data that clearly reflects the research hypotheses of the proposal in a manner that resonates with both co-investigators and peer reviewers;
  • Prepare written summaries of quantitative analyses for journal publication, presentations at scientific meetings, grant applications, and review by regulatory agencies;

Data Management

  • Identify the uses to which data management can be put in practical statistical analysis, including the establishment of standards for documentation, archiving, auditing, and confidentiality; guidelines for accessibility; security; structural issues; and data cleaning;
  • Differentiate between analytical and data management functions through knowledge of the role and functions of databases, different types of data storage, and the advantages and limitations of rigorous database systems in conjunction with statistical tools;
  • Describe the different types of database management systems, the ways these systems can provide data for analysis and interact with statistical software, and methods for evaluating technologies pertinent to both; and
  • Assess database tools and the database functions of statistical software, with a view to explaining the impact of data management processes and procedures on their own research. 

Required Courses

The required courses enable degree candidates to gain proficiency in study design, application of commonly-used statistical procedures, use of statistical software packages, and successful interpretation and communication of analysis results. A required course may be waived for students with demonstrated expertise in that field of study. If a student places out of one or more required courses, that student must substitute other courses, perhaps a more advanced course in the same area or another elective course in biostatistics or another discipline, with the approval of the student’s faculty advisor.

The program, which consists of 30 credits of coursework and research, may be completed in one year, provided the candidate begins study during the summer semester of his or her first year. If preferred, candidates may pursue the MS/CRM on a part-time basis. The degree program must be completed within five years of the start date.

The curriculum, described below, comprises 24 credits of required courses, including a 3-credit research project (the "Master's essay") to be completed during the final year of study, and two electives totaling 6 credits. Note that even if a course is waived, students must still complete a minimum of 30 credits to be awarded the MS degree.

Commonly chosen elective courses include:

Master's Essay

As part of MS/CRM training, each student is required to register for the 3-credit Master's essay course (P9160). This course provides direct support and supervision for the completion of the required research project, or Master's essay, consisting of a research paper of publishable quality. CRM candidates should register for the Master's essay during the spring semester of their final year of study. Students are required to come to the Master's essay course with research data in hand for analysis and interpretation.

CRM graduates have written excellent Master's essays over the years, many of which were ultimately published in the scientific literature. Some titles include:

  • A Comprehensive Analysis of the Natural History and the Effect of Treatment on Patients with Malignant Pleural Mesothelioma
  • Prevalence and Modification of Cardiovascular Risk Factors in Early Chronic Kidney Disease: Data from the Third National Health and Nutrition Examination Survey
  • Perspectives on Pediatric Outcomes: A Comparison of Parents' and Children's Ratings of Health-Related Quality of Life
  • Clinical and Demographic Profiles of Cancer Discharges throughout New York State Compared to Corresponding Incidence Rates, 1990-1994

Sample Timeline

Candidates may choose to complete the CRM program track on a part-time basis, or complete all requirements within one year (July through May). To complete the degree in one year, coursework must commence during the summer term. 

Note that course schedules change from year to year, so class days and times in future years will differ from the sample schedule below; check the current course schedule for each year on the course directory page.

Paul McCullough, Director of Academic Programs, Department of Biostatistics, Columbia University, [email protected], 212-342-3417

More information on Admission Requirements and Deadlines.

Weill Cornell Medicine


Clinical & Translational Science Center

Clinical Research Methodology Curriculum

Application instructions.


The Clinical Research Methodology Curriculum (CRMC) is a one-year clinical research methodology program for investigators with clinical research experience seeking up-to-date knowledge in the field of clinical research. It is conducted at Memorial Sloan Kettering Cancer Center to promote greater flexibility for trainees from across the CTSC partner institutions. The CRMC allows participants either to enroll in the entire program or to audit specific components that address self-identified educational needs.

The Clinical Research Methodology Curriculum is currently accepting applications for the 2023-2024 academic year. The deadline to submit applications is Friday, August 18, 2023, at 5:00 PM.


Clinical & Translational Science Center 1300 York Ave., Box 149 New York, NY 10065

Case Study Research Method in Psychology

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


Case studies are in-depth investigations of a person, group, event, or community. Typically, data is gathered from various sources using several methods (e.g., observations & interviews).

The case study research method originated in clinical medicine (the case history, i.e., the patient’s personal history). In psychology, case studies are often confined to the study of a particular individual.

The information is mainly biographical and relates to events in the individual’s past (i.e., retrospective), as well as to significant events that are currently occurring in his or her everyday life.

The case study is not a single research method in itself; rather, researchers select methods of data collection and analysis that will generate material suitable for case studies.

Freud (1909a, 1909b) conducted very detailed investigations into the private lives of his patients in an attempt to both understand and help them overcome their illnesses.

This makes it clear that the case study is a method that should only be used by a psychologist, therapist, or psychiatrist, i.e., someone with a professional qualification.

There is an ethical issue of competence. Only someone qualified to diagnose and treat a person can conduct a formal case study relating to atypical (i.e., abnormal) behavior or atypical development.


 Famous Case Studies

  • Anna O – One of the most famous case studies, documenting psychoanalyst Josef Breuer’s treatment of “Anna O” (real name Bertha Pappenheim) for hysteria in the late 1800s using early psychoanalytic theory.
  • Little Hans – A child psychoanalysis case study published by Sigmund Freud in 1909 analyzing his five-year-old patient Herbert Graf’s house phobia as related to the Oedipus complex.
  • Bruce/Brenda – Gender identity case of the boy (Bruce) whose botched circumcision led psychologist John Money to advise gender reassignment and raise him as a girl (Brenda) in the 1960s.
  • Genie Wiley – Linguistics/psychological development case of the victim of extreme isolation abuse who was studied in 1970s California for effects of early language deprivation on acquiring speech later in life.
  • Phineas Gage – One of the most famous neuropsychology case studies analyzes personality changes in railroad worker Phineas Gage after an 1848 brain injury involving a tamping iron piercing his skull.

Clinical Case Studies

  • Studying the effectiveness of psychotherapy approaches with an individual patient
  • Assessing and treating mental illnesses like depression, anxiety disorders, PTSD
  • Neuropsychological cases investigating brain injuries or disorders

Child Psychology Case Studies

  • Studying psychological development from birth through adolescence
  • Cases of learning disabilities, autism spectrum disorders, ADHD
  • Effects of trauma, abuse, deprivation on development

Types of Case Studies

  • Explanatory case studies: Used to explore causation in order to find underlying principles. Helpful for qualitative analysis that explains presumed causal links.
  • Exploratory case studies: Used to explore situations where an intervention being evaluated has no clear set of outcomes. They help define questions and hypotheses for future research.
  • Descriptive case studies: Describe an intervention or phenomenon and the real-life context in which it occurred. They are helpful for illustrating certain topics within an evaluation.
  • Multiple-case studies: Used to explore differences between cases and replicate findings across cases. Helpful for comparing and contrasting specific cases.
  • Intrinsic: Used to gain a better understanding of a particular case. Helpful for capturing the complexity of a single case.
  • Collective: Used to explore a general phenomenon using multiple case studies. Helpful for jointly studying a group of cases in order to inquire into the phenomenon.

Where Do You Find Data for a Case Study?

There are several places to find data for a case study. The key is to gather data from multiple sources to get a complete picture of the case and corroborate facts or findings through triangulation of evidence. Most of this information is likely qualitative (i.e., verbal description rather than measurement), but the psychologist might also collect numerical data.

1. Primary sources

  • Interviews – Interviewing key people related to the case to get their perspectives and insights. The interview is an extremely effective procedure for obtaining information about an individual, and it may be used to collect comments from the person’s friends, parents, employer, workmates, and others who have a good knowledge of the person, as well as to obtain facts from the person him or herself.
  • Observations – Observing behaviors, interactions, processes, etc., related to the case as they unfold in real-time.
  • Documents & Records – Reviewing private documents, diaries, public records, correspondence, meeting minutes, etc., relevant to the case.

2. Secondary sources

  • News/Media – News coverage of events related to the case study.
  • Academic articles – Journal articles, dissertations etc. that discuss the case.
  • Government reports – Official data and records related to the case context.
  • Books/films – Books, documentaries or films discussing the case.

3. Archival records

Searching historical archives, museum collections, and databases to find relevant documents and visual/audio records related to the case history and context.

Public archives like newspapers, organizational records, and photographic collections could all contain potentially relevant information that sheds light on attitudes, cultural perspectives, common practices, and historical contexts related to psychology.

4. Organizational records

Organizational records offer the advantage of often having large datasets collected over time that can reveal or confirm psychological insights.

Of course, privacy and ethical concerns regarding confidential data must be navigated carefully.

However, with proper protocols, organizational records can provide invaluable context and empirical depth to qualitative case studies exploring the intersection of psychology and organizations.

  • Organizational/industrial psychology research : Organizational records like employee surveys, turnover/retention data, policies, incident reports etc. may provide insight into topics like job satisfaction, workplace culture and dynamics, leadership issues, employee behaviors etc.
  • Clinical psychology : Therapists/hospitals may grant access to anonymized medical records to study aspects like assessments, diagnoses, treatment plans etc. This could shed light on clinical practices.
  • School psychology : Studies could utilize anonymized student records like test scores, grades, disciplinary issues, and counseling referrals to study child development, learning barriers, effectiveness of support programs, and more.

How do I Write a Case Study in Psychology?

Follow specified case study guidelines provided by a journal or your psychology tutor. General components of clinical case studies include: background, symptoms, assessments, diagnosis, treatment, and outcomes. Interpreting the information means the researcher decides what to include or leave out. A good case study should always clarify which information is the factual description and which is an inference or the researcher’s opinion.

1. Introduction

  • Provide background on the case context and why it is of interest, presenting background information like demographics, relevant history, and presenting problem.
  • Compare briefly to similar published cases if applicable. Clearly state the focus/importance of the case.

2. Case Presentation

  • Describe the presenting problem in detail, including symptoms, duration, and impact on daily life.
  • Include client demographics like age and gender, information about social relationships, and mental health history.
  • Describe all physical, emotional, and/or sensory symptoms reported by the client.
  • Use patient quotes to describe the initial complaint verbatim. Follow with full-sentence summaries of relevant history details gathered, including key components that led to a working diagnosis.
  • Summarize clinical exam results, namely orthopedic/neurological tests, imaging, lab tests, etc. Note actual results rather than subjective conclusions. Provide images if clearly reproducible/anonymized.
  • Clearly state the working diagnosis or clinical impression before transitioning to management.

3. Management and Outcome

  • Indicate the total duration of care and number of treatments given over what timeframe. Use specific names/descriptions for any therapies/interventions applied.
  • Present the results of the intervention, including any quantitative or qualitative data collected.
  • For outcomes, utilize visual analog scales for pain, medication usage logs, etc., if possible. Include patient self-reports of improvement/worsening of symptoms. Note the reason for discharge/end of care.

4. Discussion

  • Analyze the case, exploring contributing factors, limitations of the study, and connections to existing research.
  • Analyze the effectiveness of the intervention, considering factors like participant adherence, limitations of the study, and potential alternative explanations for the results.
  • Identify any questions raised in the case analysis and relate insights to established theories and current research if applicable. Avoid definitive claims about physiological explanations.
  • Offer clinical implications, and suggest future research directions.

5. Additional Items

  • Thank specific assistants for writing support only. No patient acknowledgments.
  • References should directly support any key claims or quotes included.
  • Use tables/figures/images only if substantially informative. Include permissions and legends/explanatory notes.
Strengths

  • Provides detailed (rich qualitative) information.
  • Provides insight for further research.
  • Permits investigation of otherwise impractical (or unethical) situations.

Case studies allow a researcher to investigate a topic in far more detail than might be possible if they were trying to deal with a large number of research participants (nomothetic approach) with the aim of ‘averaging’.

Because of their in-depth, multi-sided approach, case studies often shed light on aspects of human thinking and behavior that would be unethical or impractical to study in other ways.

Research that only looks into the measurable aspects of human behavior is not likely to give us insights into the subjective dimension of experience, which is important to psychoanalytic and humanistic psychologists.

Case studies are often used in exploratory research. They can help us generate new ideas (that might be tested by other methods). They are an important way of illustrating theories and can help show how different aspects of a person’s life are related to each other.

The method is, therefore, important for psychologists who adopt a holistic point of view (i.e., humanistic psychologists ).

Limitations

  • Lacking scientific rigor and providing little basis for generalization of results to the wider population.
  • Researchers’ own subjective feelings may influence the case study (researcher bias).
  • Difficult to replicate.
  • Time-consuming and expensive.
  • The large volume of data generated, together with time constraints, can limit the depth of analysis that is possible within the available resources.

Because a case study deals with only one person/event/group, we can never be sure if the case study investigated is representative of the wider body of “similar” instances. This means the conclusions drawn from a particular case may not be transferable to other settings.

Because case studies are based on the analysis of qualitative (i.e., descriptive) data , a lot depends on the psychologist’s interpretation of the information she has acquired.

This means that there is a lot of scope for observer bias, and it could be that the subjective opinions of the psychologist intrude in the assessment of what the data mean.

For example, Freud has been criticized for producing case studies in which the information was sometimes distorted to fit particular behavioral theories (e.g., Little Hans ).

This is also true of Money’s interpretation of the Bruce/Brenda case study (Diamond, 1997) when he ignored evidence that went against his theory.

Breuer, J., & Freud, S. (1895).  Studies on hysteria . Standard Edition 2: London.

Curtiss, S. (1981). Genie: The case of a modern wild child .

Diamond, M., & Sigmundson, K. (1997). Sex Reassignment at Birth: Long-term Review and Clinical Implications. Archives of Pediatrics & Adolescent Medicine , 151(3), 298-304

Freud, S. (1909a). Analysis of a phobia of a five-year-old boy. In The Pelican Freud Library (1977), Vol. 8, Case Histories 1, pp. 169-306.

Freud, S. (1909b). Bemerkungen über einen Fall von Zwangsneurose (Der “Rattenmann”). Jb. psychoanal. psychopathol. Forsch ., I, p. 357-421; GW, VII, p. 379-463; Notes upon a case of obsessional neurosis, SE , 10: 151-318.

Harlow J. M. (1848). Passage of an iron rod through the head.  Boston Medical and Surgical Journal, 39 , 389–393.

Harlow, J. M. (1868).  Recovery from the Passage of an Iron Bar through the Head .  Publications of the Massachusetts Medical Society. 2  (3), 327-347.

Money, J., & Ehrhardt, A. A. (1972).  Man & Woman, Boy & Girl : The Differentiation and Dimorphism of Gender Identity from Conception to Maturity. Baltimore, Maryland: Johns Hopkins University Press.

Money, J., & Tucker, P. (1975). Sexual signatures: On being a man or a woman.

Further Information

  • Case Study Approach
  • Case Study Method
  • Enhancing the Quality of Case Studies in Health Services Research
  • “We do things together” A case study of “couplehood” in dementia
  • Using mixed methods for evaluating an integrative approach to cancer care: a case study



  • Open access
  • Published: 05 June 2024

Current status and ongoing needs for the teaching and assessment of clinical reasoning – an international mixed-methods study from the students' and teachers' perspective

F. L. Wagner, M. Sudacka, A. A. Kononowicz, M. Elvén, S. J. Durning, I. Hege & S. Huwendiek

BMC Medical Education, volume 24, Article number: 622 (2024)


Clinical reasoning (CR) is a crucial ability that can prevent errors in patient care. Despite its important role, CR is often not taught explicitly and, even when it is taught, typically not all aspects of this ability are addressed in health professions education. Recent research has shown the need for explicit teaching of CR for both students and teachers. To further develop the teaching and learning of CR we need to improve the understanding of students' and teachers' needs regarding content as well as teaching and assessment methods for a student and trainer CR curriculum.

A parallel mixed-methods design was used, with web surveys and semi-structured interviews gathering data from both students (n survey = 100; n interviews = 13) and teachers (n survey = 112; n interviews = 28). The interviews and surveys contained similar questions to allow for triangulation of the results. This study was conducted as part of the EU-funded project DID-ACT ( https://did-act.eu ).

Both the surveys and interview data emphasized the need for content in a clinical reasoning (CR) curriculum such as “gathering, interpreting and synthesizing patient information”, “generating differential diagnoses”, “developing a diagnostic and a treatment plan” and “collaborative and interprofessional aspects of CR”. There was high agreement that case-based learning and simulations are most useful for teaching CR. Clinical and oral examinations were favored for the assessment of CR. The preferred format for a train-the-trainer (TTT)-course was blended learning. There was also some agreement between the survey and interview participants regarding contents of a TTT-course (e.g. teaching and assessment methods for CR). The interviewees placed special importance on interprofessional aspects also for the TTT-course.

Conclusions

We found some consensus on needed content, teaching and assessment methods for a student and TTT-course in CR. Future research could investigate the effects of CR curricula on desired outcomes, such as patient care.


Introduction

Clinical reasoning (CR) is a universal ability that mobilizes integration of necessary fundamental knowledge while delivering high-quality patient care in a variety of contexts in a timely and effective way [ 1 , 2 ]. Daniel et al. [ 3 ] define it as a “skill, process or outcome wherein clinicians observe, collect, and interpret data to diagnose and treat patients”. CR encompasses health professionals thinking and acting in patient assessment, diagnostic, and management processes in clinical situations, taking into account the patient ‘s specific circumstances and preferences [ 4 ]. How CR is defined can vary between health professions, but there are also similarities [ 5 ]. Poor CR is associated with low-quality patient care and increases the risk of medical errors [ 6 ]. Berner and Graber [ 7 ] suggested that the rate of diagnostic error is around 15%, underlining the threat that insufficient CR ability poses to patient safety as well as increasing healthcare costs [ 8 ]. Despite the importance of CR, it appears to be rarely taught or assessed explicitly, often only parts of the CR process are covered in existing curricula, and there seems to be a lack of progression throughout curricula (e.g. [ 9 , 10 , 11 , 12 , 13 , 14 ].). Moreover, teachers are often not trained to explicitly teach CR, including explaining their own reasoning to others [ 10 , 11 , 12 ] although this appears to be an important factor in the implementation of a CR curriculum [ 15 ]. Some teachers even question whether CR can be explicitly taught [ 16 ]. Considering these findings, efforts should be made to incorporate explicit teaching of CR into health care professions curricula and training for teachers should be established based on best evidence. However, to date, little is known about what a longitudinal CR curriculum should incorporate to meet the needs of teachers and students.

Insights regarding teaching CR were provided from a global survey by Kononowicz et al. [ 10 ], who reported a need for a longitudinal CR curriculum. However, the participants in their study were mainly health professions educators, leaving the needs of students for a CR curriculum largely unknown. As students are future participants of a CR curriculum, their needs should also be investigated. Kononowicz et al. [ 10 ] also identified a lack of qualified faculty to teach CR. A train-the-trainer course for CR could help reduce this barrier to teaching CR. To the best of our knowledge, in addition to the work by Kononowicz et al. [ 10 ], no research exists yet that addresses the needs of teachers for such a course, and Kononowicz et al. [ 10 ] did not investigate their needs beyond course content. Recently, Gupta et al. [ 12 ] and Gold et al. [ 13 ] conducted needs analyses regarding clinical reasoning instruction from the perspective of course directors at United States medical schools, yet a European perspective is missing. Thus, our research questions were the following:

What aspects of clinical reasoning are currently taught and how important are they in a clinical reasoning curriculum according to teachers and students?

What methods are currently used to teach and assess clinical reasoning and which methods would be ideal according to teachers and students?

In what study year does the teaching of clinical reasoning currently begin and when should it ideally begin according to teachers and students?

How should a train-the-trainer course for teachers of clinical reasoning be constructed regarding content and format?

Methods

In this study, we used a convergent parallel mixed-methods design [ 17 ] within a pragmatic constructivist case study approach [ 18 ]. We simultaneously collected data from students and educators using online questionnaires and semi-structured interviews to gain deeper insight into their needs regarding one particular situation [ 19 ], the development of a clinical reasoning curriculum, and thereby address our research questions. To help ensure that the results of the survey and the interviews could be compared and integrated, we constructed the questions for the survey and the interviews similarly, with the exception that in the interviews, the questions were first asked openly. The design was parallel both in that we collected data simultaneously and in that we constructed the survey and interviews to cover similar topics. We chose this approach to obtain comprehensive answers to the research questions and to facilitate later triangulation [ 17 ] of the results.

Context of this study

We conducted this study within the EU-funded (Erasmus + program) project DID-ACT (“Developing, implementing, and disseminating an adaptive clinical reasoning curriculum for healthcare students and educators”; https://did-act.eu ). Institutions from six European countries (Augsburg University, Germany; Jagiellonian University in Kraków, Poland; Maribor University, Slovenia; Örebro University, Sweden; University of Bern, Switzerland; EDU, a higher medical education institution based in Malta, Instruct GmbH, Munich, Germany) with the support of associate partners (e.g., Prof. Steven Durning, Uniformed Services University of the Health Sciences, USA; Mälardalen University, Sweden.) were part of this project. For further information, see https://did-act.eu/team-overview/team/ . In this project, we developed an interprofessional longitudinal clinical reasoning curriculum for students in healthcare education and a train-the-trainer course for health profession educators. The current curriculum (for a description of the curriculum, see Hege et al. [ 20 ]) was also informed by this study. This study was part of the Erasmus + Knowledge Alliance DID-ACT (612,454-EPP-1–2019-1-DE-EPPKA2-KA).

Target groups

We identified two relevant target groups for this study, teachers and students, which are potential future users and participants of a train-the-trainer (TTT-) course and a clinical reasoning curriculum, respectively. The teacher group also included individuals who were considered knowledgeable regarding the current status of clinical reasoning teaching and assessment at their institutions (e.g. curriculum managers). These specific participants were individually selected by the DID-ACT project team to help ensure that they had the desired level of expertise. The target groups included different health professions from a large number of countries (see Table 1), as we wanted to gather insights that are not restricted to one profession.

Development of data collection instruments

Development of questions.

The questions in this study addressed the current status and needs regarding content, teaching, and assessment of clinical reasoning (CR). They were based on the questions used by Kononowicz et al. [ 10 ] and were expanded to obtain more detailed information. Specifically, regarding CR content, we added additional aspects (see Table 8 in the Appendix for details). The contents covered in this part of the study also align with the five domains of CR education (clinical reasoning concepts, history and physical examination, choosing and interpreting diagnostic tests, problem identification and management and shared decision-making) that were reported by Cooper et al. [ 14 ]. It has been shown that there are similarities between professions regarding the definition of CR (e.g. history taking or an emphasis on clinical skills), while nurses placed greater importance on a patient-centered approach [ 5 ]. We aimed to cover as many aspects of CR in the contents as possible to represent these findings. We expanded the questions on CR teaching formats to cover a broader range of formats. Furthermore, two additional assessment methods were added to the respective questions. Finally, one aspect was added to the content questions for a train-the-trainer course (see Table 8 in the Appendix ). As a lack of qualified faculty to teach CR was identified in the study by Kononowicz et al. [ 10 ], we added additional questions on the specific needs for the design of a CR train-the-trainer course beyond content. Table 8 in the Appendix shows the adaptations that we made in detail.

We discussed the questions within the interprofessional DID-ACT project team and adapted them in several iterative cycles until the final versions of the survey questionnaire and the interview guide were obtained and agreed upon. We tested the pre-final versions with think-alouds [ 21 ] to ensure that the questions were understandable and interpreted as intended, which led to a few changes. The survey questionnaires and interview guides can be found at https://did-act.eu/results/ and accessed via links in table sections D1.1a (survey questions) and D1.1b (interview guides), respectively. Of these questions, we included only those relevant to the research questions addressed in this study. The questions included in this study can be found in Table 8 in the Appendix.

Teachers were asked questions about all content areas, but only the expert subgroup was asked to answer questions on the current situation regarding the teaching and assessment of clinical reasoning at their institutions, as they were considered the best informed group on the matter. Furthermore, students were not asked questions on the train-the-trainer course. Using the abovementioned procedures, we also hoped to improve the response rate as longer surveys were found to be associated with lower response rates [ 22 ].

We created two different versions of the interview guide, one for teachers and one for students. The student interview guide did not contain questions on the current status of clinical reasoning teaching and assessment or questions about the train-the-trainer course. The interview guides were prepared with detailed instructions to ensure that the interviews were conducted in a comparable manner at all locations. By using interviews, we intended to obtain a broad picture of existing needs. Individual interviews further allowed participants to speak their own languages and thus to express themselves naturally and as precisely as possible.

Reflexivity statement

Seven researchers representing different perspectives and professions form the study team. MS, a PhD candidate, represents the junior researcher perspective, while the other team members (SD, SH, AK, IH, ME, FW) are experienced researchers with broad backgrounds in clinical reasoning and in qualitative as well as quantitative research. ME represents the physiotherapist perspective; SD, SH, and MS represent the medical perspective. We discussed all steps of the study within the team and made joint decisions.

Data collection and analysis

The survey was created using LimeSurvey software (LimeSurvey GmbH). The survey links were distributed via e-mail (individual invitations, posts to institutional mailing lists, newsletters) by the DID-ACT project team and associate partners (the target groups received specific links to the online-survey). The e-mail contained information on the project and its goals. By individually contacting persons in the local language, we hoped to increase the likelihood of participation. The survey was anonymous. The data were collected from March to July 2020.

Potential interview participants were contacted personally by the DID-ACT project team members in their respective countries. We used a convenience sampling approach, personally contacting potential interview partners in the local language to encourage as many people as possible to participate. The interviews were conducted in the local languages, also to avoid language barriers, and were audio-recorded to help with the analysis and for documentation purposes. Most interviews were conducted using online meeting services (e.g. Skype or Zoom) because of restrictions due to the coronavirus pandemic, which was ongoing when data collection began at the start of the DID-ACT project. The data were collected from March to July 2020. All interview partners provided informed consent.

Ethics approval and consent to participate

We asked the Bern Ethics Committee to approve this multi-institutional study. This type of study was regarded as exempt from formal ethical approval according to the regulations of the Bern Ethics Committee (‘Kantonale Ethikkommission Bern’, decision Req-2020–00074). All participants voluntarily participated and provided informed consent before taking part in this study.

Data analysis

Descriptive analyses were performed using SPSS statistics software (version 28, 2021). Independent samples t-tests were computed for comparisons between teachers and students. When the variances of the two groups were unequal, Welch’s test was used. Bonferroni correction of significance levels was used to counteract alpha error accumulation in repeated tests. The answers to the free-text questions were screened for recurring themes. There were very few free-text comments, and they typically repeated aspects from the closed questions; hence, no meaningful thematic analysis was possible. For this reason, the survey comments are mentioned only where they made a unique contribution to the results.
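For readers who want a concrete picture of the procedure just described, the sketch below shows how an independent-samples comparison with Welch's test, a Cohen's d effect size, and a Bonferroni-adjusted significance level could be computed in Python with SciPy. It is purely illustrative: the authors used SPSS, and the group sizes, ratings, and number of comparisons below are made-up placeholders rather than the study data.

    import numpy as np
    from scipy import stats

    # Hypothetical importance ratings for one content area (placeholder data,
    # not the DID-ACT survey data); group sizes mirror the reported samples.
    rng = np.random.default_rng(0)
    teachers = rng.normal(4.2, 0.6, 112)
    students = rng.normal(3.9, 0.8, 100)

    # Welch's t-test: does not assume equal variances in the two groups.
    t_stat, p_value = stats.ttest_ind(teachers, students, equal_var=False)

    # Cohen's d using the pooled standard deviation.
    n1, n2 = len(teachers), len(students)
    s_pooled = np.sqrt(((n1 - 1) * teachers.std(ddof=1) ** 2 +
                        (n2 - 1) * students.std(ddof=1) ** 2) / (n1 + n2 - 2))
    d = (teachers.mean() - students.mean()) / s_pooled

    # Bonferroni correction: with k repeated comparisons, each test is judged
    # against alpha / k (e.g. 0.05 / 10 = 0.005) instead of alpha.
    k = 10  # hypothetical number of rated items compared
    alpha_adjusted = 0.05 / k

    print(f"t = {t_stat:.3f}, p = {p_value:.4f}, d = {d:.3f}, "
          f"Bonferroni-adjusted alpha = {alpha_adjusted}")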

The interviews were translated into English by the partners. An overarching summarizing qualitative content analysis [ 23 ] of the data was conducted. A summarizing content analysis is particularly useful when the content level of the material is of interest. Its goal is to reduce the material to manageable short texts in a way that retains the essential meaning [ 23 ]. The analysis was conducted first by two of the authors of the study (FW, SH) and then discussed by the entire author team. The analysis was carried out as an iterative process until a complete consensus was reached within the author team.

The results from the surveys and interviews were compared and are presented together in the results section. The qualitative data are reported in accordance with the standards for reporting qualitative research (SRQR, O’Brien et al. [ 24 ]).

Results

Table 1 shows the professional background and country of the interviewees and survey samples. The survey was opened by 857 persons, 212 (25%) of whom answered the questions included in this study. The expert subgroup of teachers who answered the questions on the current status of clinical reasoning teaching and assessment encompassed 45 individuals.

Content of a clinical reasoning curriculum for students

The survey results show that “Gathering, interpreting, and synthesizing patient information”, is currently most extensively taught, while “Theories of clinical reasoning” are rarely taught (see Table  2 ). In accordance with these findings, “Gathering, interpreting, and synthesizing patient information” received the highest mean importance rating for a clinical reasoning curriculum while “Theories of clinical reasoning” received the lowest importance rating. Full results can be found in Table 9 in the Appendix .

Teachers and students differed significantly in their importance ratings of two content areas, “Gathering, interpreting, and synthesizing patient information” ( t (148.32) = 4.294, p  < 0.001, d  = 0.609) and “Developing a problem formulation/hypothesis” ( t (202) = 4.006, p  < 0.001, d  = 0.561), with teachers assigning greater importance to both of these content areas.

The results from the interviews are in line with those from the survey. Details can be found in Table 12 in the Appendix .

Clinical reasoning teaching methods

The survey participants reported that, most often, case-based learning is currently applied in the teaching of clinical reasoning (CR). This format was also rated as most important for teaching CR (see Table  3 ). Full results can be found in Table 10 in the Appendix .

Teachers and students differed significantly in their importance ratings of Team-based learning ( t (202) = 3.079, p  = 0.002, d  = 0.431), with teachers assigning greater importance to this teaching format.

Overall, the interviewees provided very similar judgements to the survey participants. In addition to the teaching formats shown in Table 3, some interviewees would employ blended learning, and clinical teaching formats such as bedside teaching and internships were also mentioned. Details can be found in the Appendix in Table 13. Beyond the importance of each individual teaching format, it was also argued that all of the formats can be useful because they serve different objectives and that there is no single best format for teaching CR.

Start of clinical reasoning teaching in curricula

Most teachers (52.5%) reported that currently, the teaching of clinical reasoning (CR) starts in the first year of study. Most often (46.4%), the participants also chose the first study year as the optimal year for starting to teach CR. In accordance with the survey results, the interviewees also advocated for an early start to the teaching of CR. Some interview participants who advocated for a later start of CR teaching suggested that students first need a solid knowledge base and that explicit teaching of CR should begin once clinical/practical education starts.

Assessment of clinical reasoning

The survey results suggest that currently written tests or clinical examinations are most often used, while Virtual Patients are used least often (see Table  4 ). Despite written tests being the most common current assessment format, they received the lowest importance rating for a future longitudinal CR curriculum. Full results can be found in Table 11 in the Appendix .

Teachers and students differed significantly in their importance ratings of clinical examinations ( t (161.81) = 2.854, p  = 0.005, d  = 0.413) and workplace-based assessments ( t (185) = 2.640, p = 0.009, d  = 0.386) with teachers assigning greater importance to both of these assessment formats.

The interviewees also placed importance on all assessment methods but found it difficult to assess CR with written assessment methods. The students seemed to associate clinical examinations more with practical skills than with CR. Details can be found in the Appendix in Table 14. Two of the interview participants mentioned that CR is currently not assessed at their institutions, and one person mentioned that students are asked to self-reflect on their interactions with patients and on potential improvements.

Train-the-trainer course

The following sections highlight the results from the needs analysis regarding a train-the-trainer (TTT-) course. The questions presented here were posed only to the teachers.

Most survey participants reported that there is currently no TTT-course on clinical reasoning at their institution but that they think such a course is necessary (see Table 5). The same was true for the interviewees (no existing TTT-course on clinical reasoning, but a need for one).

In the interviews, 22 participants (78.6%) answered that a TTT-course is necessary for healthcare educators, two participants answered that no such course was necessary, and two other participants were undecided about its necessity. None of the institutions represented by the interviewees offers a TTT-course for teaching clinical reasoning.

When asked what the best format for a clinical reasoning TTT-course would be (single-answer question), the majority of the survey participants favored a blended learning / flipped classroom approach, a combination of e-learning and face-to-face meetings (see Table 6).

In the survey comments it was noted that blended-learning encompasses the benefits of both self-directed learning and discussion/learning from others. It would further allow teachers to gather knowledge about CR first in an online learning phase where they can take the time they need before coming to a face-to-face meeting.

The interviewees also found a blended-learning approach particularly suitable for a TTT-course. An e-learning-only course was viewed more critically because teachers may lack motivation to participate in an online-only setting, while a one-time face-to-face meeting would not provide enough time. In some interviews, it was emphasized that teachers should experience for themselves what they are supposed to teach to the students and also that the trainers of the teachers need to have a solid education and knowledge of clinical reasoning.

Table 7 shows the importance ratings of potential content of a TTT-course generated from the survey. To elaborate on this content, comments by the interviewees were added. On average, all content was seen as (somewhat) important with teaching methods on the ward and/or clinic receiving the highest ratings. Some interviewees also mentioned the importance of interprofessional aspects and interdisciplinary understanding of CR. In the survey comments, some participants further expressed their interest in such a course.

Finally, the interviewees were asked about the ideal length of a clinical reasoning TTT-course. The answers varied greatly from 2–3 hours to a two-year educational program, with a tendency toward 1–2 days. Several interviewees commented that the time teachers are able to spend on a TTT-course is limited. This should be considered in the planning of such a course to make participation feasible for teachers.

Discussion

In this study, we investigated the current status of and suggestions for teaching and assessment of clinical reasoning (CR) in a longitudinal curriculum as well as suggestions for a train-the-trainer (TTT-) course for CR. Teachers and students were invited to participate in online-surveys as well as semi-structured interviews to derive answers to our research questions. Regarding the contents of a CR curriculum for students, the results of the surveys and interviews were comparable and favoured content such as gathering, interpreting, and synthesizing patient information, generating differential diagnoses, and developing a diagnostic and a treatment plan. In the interviews, high importance was additionally placed on collaborative and interprofessional aspects of CR. Case-based learning and simulations were seen as the most useful methods for teaching CR, and clinical and oral examinations were favoured for the assessment of CR. The preferred format for a TTT-course was blended learning. In terms of course content, teaching and assessment methods for CR were emphasized. In addition to research from the North American region [ 11 ], this study provides results from predominantly European countries that support the existing findings.

Content of a clinical reasoning curriculum

Our results revealed that there are still aspects of clinical reasoning (CR), such as “Errors in the clinical reasoning process and strategies to avoid them” or “Interprofessional aspects of CR”, that are rarely taught despite their high importance, corroborating the findings of Kononowicz et al. [ 10 ]. According to the interviewees, students should have basic knowledge of CR before they are taught about errors in the CR process and strategies to avoid them. The lack of teaching of errors in CR may also stem from a lack of institutional culture regarding how to manage failures in a constructive way (e.g. [ 16 , 25 ]), making it difficult to explicitly address errors and strategies to avoid them. Although highly relevant in the everyday practice of healthcare professions and underpinned by CR theoretical frameworks (e.g., distributed cognition [ 26 ]), interprofessional and collaborative aspects of CR are currently rarely considered in the teaching of CR. The interviews suggested that hierarchical distance and cultural barriers may contribute to this finding. Sudacka et al. [ 16 ] also reported cultural barriers as one reason for a lack of CR teaching. Generally, the interviewees seemed to place greater importance on interprofessional and collaborative aspects than did the survey participants. This may have been due to differences in the professions represented in the two modalities (e.g., a greater percentage of nurses among the interview participants, who tend to define CR more broadly than physicians [ 5 ]).

“Self-reflection on clinical reasoning performance and strategies for future improvement”, “Developing a problem formulation/hypothesis” and “Aspects of patient-participation in CR” were rated as important but are currently rarely taught, a finding not previously reported. The aspect “Self-reflection on clinical reasoning performance and strategies for future improvement”, received high importance ratings, but only 25% of the survey-participants answered that it is currently taught to a great extent. The interviewees agreed that self-reflection is important and added that ideally, it should be guided by specific questions. Ogdie et al. [ 27 ] found that reflective writing exercises helped students identify errors in their reasoning and biases that contributed to these errors.

“Gathering, interpreting, and synthesizing patient information” and “Developing a problem formulation/hypothesis” were rated significantly more important by teachers than by students. It appears that students may be less aware yet of the importance of gathering, interpreting, and synthesizing patient information in the clinical reasoning process. There was some indication in the interviews that the students may not have had enough experience yet with “Developing a problem formulation/hypothesis” or associate this aspect with research, possibly contributing to the observed difference.

Overall, our results on the contents of a CR curriculum suggest that all content is important and should be included in a CR curriculum, starting with basic theoretical knowledge and data gathering to more advanced aspects such as errors in CR and collaboration. Two other recent surveys conducted in the United States among pre-clerkship clinical skills course directors [ 12 ] and members of clerkship organizations [ 13 ] came to similar conclusions regarding the inclusion of clinical reasoning content at various stages of medical curricula. How to fit the content into already dense study programs, however, can still be a challenge [ 16 ].

In addition to case-based learning and clinical teaching, human simulated patients and Team-based learning also received high importance ratings for teaching clinical reasoning (CR), a finding not previously reported. Lectures, on the other hand, are seen as the least important to teach CR (see also Kononowicz et al. [ 10 ]), as they mainly deliver factual knowledge according to the interviewees. High-fidelity simulations (mannequins) and Virtual Patients (VPs) are rarely used to teach CR at the moment and are rated less important compared to other teaching formats. Some interviewees see high-fidelity simulations as more useful for teaching practical skills. The lower importance rating of VPs was surprising given that this format is case-based, provides a safe environment for learning, and is described in the literature as a well-suited tool for teaching CR [ 28 , 29 ]. Considering that VPs seemed to be used less often at the institutions involved in this study, the lack of experience with this format may have led to this result.

Teachers rated Team-based learning as significantly more important for teaching clinical reasoning than students. In the interviews, many students seemed not to be familiar with Team-based learning, possibly explaining the lower ratings the students gave this format in the survey.

Taken together, our results suggest that there is not one best format for teaching all aspects of clinical reasoning but rather that the use of all teaching formats is justified depending on the specific content to be taught and goals to be achieved. However, there was agreement that a safe learning environment where no patients can be harmed is preferred for teaching clinical reasoning, and that discussions should be possible.

There was wide agreement that clinical reasoning (CR) teaching should start in the first year of study in the curriculum. However, a few participants of this study argued that students first need to develop some general knowledge before CR is taught. Rencic et al. [ 11 ] reported that according to internal medicine clerkship directors, CR should be taught throughout all years of medical school, with a particular focus during the clinical teaching years. A similar remark was made by participants in a survey among pre-clerkship clinical skills course directors by Gupta et al. [ 12 ] where the current structure of some curricula (e.g. late introduction of the pathophysiology) was regarded as a barrier to introducing CR from the first year of study on [ 12 ].

Our results show that the most important format for assessing clinical reasoning (CR), and the one currently used to the greatest extent, is the clinical examination (e.g. OSCE), consistent with Kononowicz et al. [ 10 ]. The interviewees emphasized that CR should ideally be assessed in a conversation or discussion where the learners can explain their reasoning. Given this argument, all assessment formats enabling a conversation are suitable for assessing CR. This is reflected in our survey results, where assessment formats that allow for a discussion with the learner received the most favourable importance ratings, including oral examinations. In agreement with Kononowicz et al. [ 10 ], we also found that written tests are currently used most often to assess CR but are rated as least important and suitable only for the assessment of some aspects of CR. Daniel et al. [ 3 ] argued that written exams such as MCQs, where correct answers have to be selected from a list of choices, are not the best representation of real practical CR ability. Thus, there still seems to be potential for improvement in the way CR is assessed.

Teachers rated clinical examinations and workplace-based assessments significantly higher than students. Based on the interviews, the students seemed to associate clinical examinations such as OSCEs more with a focus on practical skills than CR, potentially explaining their lower ratings of this format.

What a clinical reasoning train-the-trainer course should look like

Our results show a clear need for a clinical reasoning (CR) train-the-trainer course (see also Singh et al. [ 15 ]), which currently does not exist at most institutions represented in this study, corroborating findings by Kononowicz et al. [ 10 ]. A lack of adequately trained teachers is a common barrier to the introduction of CR content into curricula [ 12 , 16 ]. According to our results such a course should follow a blended learning/flipped classroom approach or consist of a series of face-to-face meetings. A blended-learning course would combine the benefits of both self-directed learning and the possibility for trainers to discuss with and learn from their peers, which could also increase their motivation to participate in such a course. An e-learning only course or a one-time face-to-face meeting were considered insufficient. The contents “Clinical reasoning strategies” and “Common errors in the clinical reasoning process” were given greater importance for the trainer-curriculum than for the students-curriculum, possibly reflecting higher expectations of trainers as “CR experts” compared with students. There was some agreement in the interviews that ideally, the course should not be too time-consuming, with participants tending towards an overall duration of 1–2 days, considering that most teachers usually have many duties and may not be able or willing to attend the course if it were too long. Lack of time was also identified as a barrier to attending teacher training [ 12 , 13 , 16 ].

Strengths and limitations

The strengths of this study include its international and interprofessional participants. Furthermore, we explicitly included teachers and students as target groups in the same study, which enables a comparison of different perspectives. Members of the target groups not only participated in a survey but were also interviewed to gain in-depth knowledge. A distinct strength of this study is its mixed-methods design. The two data collection methods employed in parallel provided convergent results, with responses from the web survey indicating global needs and semi-structured interviews contributing to a deeper understanding of the stakeholder groups’ nuanced expectations and perspectives on CR education.

This study is limited in that most answers came from physicians, making the results potentially less generalizable to other professions. Furthermore, there were participants from a great variety of countries, with some countries overrepresented. Because of the way the survey invitations were distributed, the exact number of recipients is unknown, making it impossible to compute an exact response rate. Also, the completion rate among individuals who opened the survey was rather low. Because the survey was anonymous, it cannot be completely ruled out that some individuals participated in both the interviews and the survey. Finally, there could have been some language issues in the interview analysis, as the data were translated to English at the local partner institutions before they were submitted for further analysis.

Conclusions

Our study provides evidence of an existing need for explicit clinical reasoning (CR) longitudinal teaching and dedicated CR teacher training. More specifically, there are aspects of CR that are rarely taught that our participants believe should be given priority, such as self-reflection on clinical reasoning performance and strategies for future improvement and aspects of patient participation in CR that have not been previously reported. Case-based learning and clinical teaching methods were again identified as the most important formats for teaching CR, while lectures were considered relevant only for certain aspects of CR. To assess CR, students should have to explain their reasoning, and assessment formats should be chosen accordingly. There was also still a clear need for a CR train-the-trainer course. In addition to existing research, our results show that such a course should ideally have a blended-learning format and should not be too time-consuming. The most important contents of the train-the-trainer course were confirmed to be teaching methods, CR strategies, and strategies to avoid errors in the CR process. Examples exist for what a longitudinal CR curriculum for students and a corresponding train-the-trainer course could look like and how these components could be integrated into existing curricula (e.g. DID-ACT curriculum [ 20 ], https://did-act.eu/integration-guide/ or the described curriculum of Singh et al. [ 15 ]). Further research should focus on whether and to what extent the intended outcomes of such a curriculum are actually reached, including the potential impact on patient care.

Availability of data and materials

All materials described in this manuscript generated during the current study are available from the corresponding author on reasonable request without breaching participant confidentiality.

Connor DM, Durning SJ, Rencic JJ. Clinical reasoning as a core competency. Acad Med. 2020;95:1166–71.


Young M, Szulewski A, Anderson R, Gomez-Garibello C, Thoma B, Monteiro S. Clinical reasoning in CanMEDS 2025. Can Med Educ J. 2023;14:58–62.


Daniel M, Rencic J, Durning SJ, Holmboe E, Santen SA, Lang V, Gruppen LD. Clinical reasoning assessment methods: a scoping review and practical guidance. Acad Med. 2019;94:902–12.

Scott IA. Errors in clinical reasoning: causes and remedial strategies. BMJ. 2009. https://doi.org/10.1136/bmj.b1860 .

Huesmann L, Sudacka M, Durning SJ, Georg C, Huwendiek S, Kononowicz AA, Schlegel C, Hege I. Clinical reasoning: what do nurses, physicians, and students reason about. J Interprof Care. 2023;37:990–8.

Norman GR, Eva KW. Diagnostic error and clinical reasoning. Med Educ. 2010;44:94–100.

Berner E, Graber M. Overconfidence as a cause of diagnostic error in medicine. Am J Med. 2008;121:2–23.

Cooper N, Da Silva AL, Powell S. Teaching clinical reasoning. In: Cooper N, Frain J, editors. ABC of clinical reasoning. 1st ed. Hoboken, NJ: John Wiley & Sons Ltd; 2016. p. 44–50.

Elvén M, Welin E, Wiegleb Edström D, Petreski T, Szopa M, Durning SJ, Edelbring S. Clinical reasoning curricula in health professions education: a scoping review. J Med Educ Curric Dev. 2023. https://doi.org/10.1177/23821205231209093 .

Kononowicz AA, Hege I, Edelbring S, Sobocan M, Huwendiek S, Durning SJ. The need for longitudinal clinical reasoning teaching and assessment: results of an international survey. Med Teach. 2020;42:457–62.

Rencic J, Trowbridge RL, Fagan M, Szauter K, Durning SJ. Clinical reasoning education at US medical schools: results from a national survey of internal medicine clerkship directors. J Gen Intern Med. 2017;32:1242–6.

Gupta S, Jackson JM, Appel JL, Ovitsh RK, Oza SK, Pinto-Powell R, Chow CJ, Roussel D. Perspectives on the current state of pre-clerkship clinical reasoning instruction in United States medical schools: a survey of clinical skills course directors. Diagnosis. 2021;9:59–68.

Gold JG, Knight CL, Christner JG, Mooney CE, Manthey DE, Lang VJ. Clinical reasoning education in the clerkship years: a cross-disciplinary national needs assessment. PLoS One. 2022;17:e0273250.

Cooper N, Bartlett M, Gay S, Hammond A, Lillicrap M, Matthan J, Singh M. UK Clinical Reasoning in Medical Education (CReME) consensus statement group. Consensus statement on the content of clinical reasoning curricula in undergraduate medical education. Med Teach. 2021;43:152–9.

Singh M, Collins L, Farrington R, Jones M, Thampy H, Watson P, Grundy J. From principles to practice: embedding clinical reasoning as a longitudinal curriculum theme in a medical school programme. Diagnosis. 2021;9:184–94.

Sudacka M, Adler M, Durning SJ, Edelbring S, Frankowska A, Hartmann D, Hege I, Huwendiek S, Sobočan M, Thiessen N, Wagner FL, Kononowicz AA. Why is it so difficult to implement a longitudinal clinical reasoning curriculum? A multicenter interview study on the barriers perceived by European health professions educators. BMC Med Educ. 2021. https://doi.org/10.1186/s12909-021-02960-w .

Hingley A, Kavaliova A, Montgomery J, O’Barr G. Mixed methods designs. In: Creswell JW, editor. Educational research: planning, conducting, and evaluating quantitative and qualitative research. 4th ed. Boston: Pearson; 2012. p. 534–75.

Merriam SB. Qualitative research and case study applications in education. Revised and expanded from "Case study research in education". San Francisco, CA: Jossey-Bass Publishers; 1998.

Cleland J, MacLeod A, Ellaway RH. The curious case of case study research. Med Educ. 2021;55:1131–41.

Hege I, Adler M, Donath D, Durning SJ, Edelbring S, Elvén M, Wiegleb Edström D. Developing a European longitudinal and interprofessional curriculum for clinical reasoning. Diagnosis. 2023;10:218–24.

Collins D. Pretesting survey instruments: an overview of cognitive methods. Qual Life Res. 2003;12:229–38.

Liu M, Wronski L. Examining completion rates in web surveys via over 25,000 real-world surveys. Soc Sci Comput Rev. 2018;36:116–24.

Mayring P, Fenzl T. Qualitative inhaltsanalyse. In: Baur N, Blasius J, editors. Handbuch methoden der empirischen Sozialforschung. Wiesbaden: Springer VS; 2019. p. 633–48.


O’Brien BC, Harris IB, Beckman TJ, Reed DA, Cook DA. Standards for reporting qualitative research: a synthesis of recommendations. Acad Med. 2014;89:1245–51.

Edmondson AC. Learning from failure in health care: frequent opportunities, pervasive barriers. BMJ Qual Saf. 2004;13 Suppl 2:ii3-ii9.

Merkebu J, Battistone M, McMains K, McOwen K, Witkop C, Konopasky A, Durning SJ. Situativity: a family of social cognitive theories for understanding clinical reasoning and diagnostic error. Diagnosis. 2020;7:169–76.

Ogdie AR, Reilly JB, Pang WG, Keddem S, Barg FK, Von Feldt JM, Myers JS. Seen through their eyes: residents’ reflections on the cognitive and contextual components of diagnostic errors in medicine. Acad Med. 2012;87:1361–7.

Berman NB, Durning SJ, Fischer MR, Huwendiek S, Triola MM. The role for virtual patients in the future of medical education. Acad Med. 2016;91:1217–22.

Plackett R, Kassianos AP, Mylan S, Kambouri M, Raine R, Sheringham J. The effectiveness of using virtual patient educational tools to improve medical students’ clinical reasoning skills: a systematic review. BMC Med Educ. 2022. https://doi.org/10.1186/s12909-022-03410-x .


Acknowledgements

We want to thank all participants of the interviews and survey who took their time to contribute to this study despite the ongoing pandemic in 2020. Furthermore, we thank the members of the DID-ACT project team who supported collection and analysis of survey and interview data.

The views expressed herein are those of the authors and not necessarily those of the Department of Defense, the Uniformed Services University or other Federal Agencies.

This study was partially supported by the Erasmus + Knowledge Alliance DID-ACT (612454-EPP-1–2019-1-DE-EPPKA2-KA).

Author information

Authors and affiliations

Institute for Medical Education, Department for Assessment and Evaluation, University of Bern, Bern, Switzerland

F. L Wagner & S. Huwendiek

Center of Innovative Medical Education, Department of Medical Education, Jagiellonian University, Kraków, Poland

Faculty of Medicine, Department of Bioinformatics and Telemedicine, Jagiellonian University, Kraków, Poland

A. A Kononowicz

School of Health, Care and Social Welfare, Mälardalen University, Västerås, Sweden

Faculty of Medicine and Health, School of Health Sciences, Örebro University, Örebro, Sweden

Uniformed Services University of the Health Sciences, Bethesda, MD, USA

S. J Durning

Institute of Medical Education, University Hospital, LMU Munich, Munich, Germany


Contributions

FW and SH wrote the first draft of the manuscript. All authors critically revised the manuscript in several rounds and approved the final manuscript.

Corresponding author

Correspondence to F. L Wagner .

Ethics declarations

This type of study was regarded as exempt from formal ethical approval according to the regulations of the Bern Ethics Committee (‘Kantonale Ethikkommission Bern’, decision Req-2020–00074). All participants voluntarily participated and provided informed consent before taking part in this study.

Consent for publication

All authors consent to publication of this manuscript.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Wagner, F., Sudacka, M., Kononowicz, A. et al. Current status and ongoing needs for the teaching and assessment of clinical reasoning – an international mixed-methods study from the students' and teachers' perspective. BMC Med Educ 24, 622 (2024). https://doi.org/10.1186/s12909-024-05518-8


Received : 16 January 2024

Accepted : 06 May 2024

Published : 05 June 2024

DOI : https://doi.org/10.1186/s12909-024-05518-8


Keywords: Clinical reasoning



Analyzing the FDA's Approach to Diversity in Clinical Trials


Certain groups remain underrepresented in clinical research. The FDA has taken steps to address inequities in clinical trials by releasing guidance to sponsors and other regulated entities that includes steps on how to address gaps in diversity. Diversity in clinical trials is critical because it helps to ensure that new drugs and medical devices are safe and effective for all populations.

How did we get here? In 2016, the FDA released its first guidance related to increasing diversity in clinical trials. At that time, sites were only required to ask two questions: "Do you identify as Hispanic or Latino?" and "Which of the following five racial designations best describes you?" While this was a good start, it left considerable room to collect much more valuable diversity metrics and data points.

In November 2020, the FDA released a guidance called 'Enhancing the Diversity of Clinical Trial Populations — Eligibility Criteria, Enrollment Practices, and Trial Designs'. This document provided approaches for increasing trial enrollment of underrepresented populations and lays the foundation for expanding access to clinical trials.

In April 2022, the FDA also issued a draft guidance called 'Diversity Plans to Improve Enrollment of Participants from Underrepresented Racial and Ethnic Populations in Clinical Trials'. This guidance provides recommendations to sponsors developing medical products on how to develop a Race and Ethnicity Diversity Plan for their studies.

First, let's dive into the November 2020 guidance. In this document, the FDA 'recognizes that some eligibility criteria have become commonly accepted over time or used as a template across trials, sometimes excluding certain populations from trials without strong clinical or scientific justification (e.g., older adults, those at the extremes of the weight range, those with malignancies or certain infections such as HIV, and children).' Further, it states that 'unnecessary exclusion of such participants may lead to a failure to discover important safety information about use of the investigational drug in patients who will take the drug after approval. Therefore, broadening eligibility criteria in later stages of drug development for the phase 3 population increases the ability to understand the therapy's benefit-risk profile across the patient population likely to use the drug in clinical practice.'

There are three key objectives of the guidance: 

Broadening eligibility criteria to increase diversity in enrollment  

  • Some patients may be unable to participate without reasonable accommodations (e.g., patients with physical and/or mental disabilities, non-English speakers, patients who work and require evening or weekend hours, and some older adult patients with limited access to transportation).
  • Developing eligibility criteria and improving trial recruitment so that the participants enrolled in trials will better reflect the population most likely to use the drug, if the drug is approved, while maintaining safety and effectiveness standards
  • Reducing the number of required visits
  • Allowing flexibility in visit windows
  • Using digital tools that detect and measure a physical or chemical characteristic (e.g., blood pressure), convert this measurement into an electronic signal, and often transmit the recorded data to remote databases (e.g., ambulatory blood pressure monitors)

Study design and conduct considerations for improving enrollment

  • Ensure a representative sample of the population by considering whether exclusion criteria can be eliminated or modified. For example, if there are unreasonable risks to participants with advanced heart failure, but enrollment of those with milder disease would be appropriate, the exclusion criteria should specifically define the population of heart failure participants that should be excluded.
  • Change or remove exclusion criteria from phase 2 studies, if possible.
  • Children (where appropriate)
  • Racial and ethnic minorities (Why? Because analyzing data on race and ethnicity may assist in identifying population-specific signals).
  • Older adults, children, and patients with disabilities or cognitive impairment
  • Patients in rural areas
  • Patients who would suffer financially if participating in a trial (missing shifts at work or having to pay for childcare while visiting a site)
  • Mistrust of clinical research among certain populations also impacts enrollment

Discussing methods for broadening eligibility criteria to clinical trials of drugs intended to treat rare diseases or conditions.

  • Early engagement with patient advocacy groups, experts and patients with the disease to solicit feedback regarding trial design to ensure support from relevant stakeholders – most importantly potential trial participants.
  • Where medically appropriate, consider re-enrollment of participants from early- to later-phase trials.
  • Consider open-label extension studies with broader inclusion criteria after early-phase studies to encourage participation by all participants (including those who have received the placebo during an early phase trial) – and to ultimately, provide access to the investigational treatment to all participants.

Broadening eligibility criteria and using more inclusive enrollment practices as outlined in the 2020 guidance can significantly enhance the quality of studies. This will ultimately lead to: 

  • Better Representation: The study population more closely reflects the real-world patients who will use the drug if approved.
  • Improved Safety Detection: A wider range of participants allows for the discovery of safety information that might be missed in a narrower pool of participants.
  • Informed Benefit-Risk Analysis: By including a more diverse population, researchers gain a clearer understanding of the therapy’s benefits and risks throughout later development stages (phase 3). This knowledge helps determine if the drug is truly suitable for the intended patient population in clinical practice.

The April 2022 guidance focused more on sponsor initiatives to enhance diversity in clinical trials. Sponsors play a pivotal role in developing and submitting comprehensive Diversity Plans (DP), a cornerstone for fostering inclusivity and enhancing the effectiveness and safety of medical products. The FDA expects sponsors to develop a comprehensive strategy for each medical product to enhance enrollment of underrepresented racial and ethnic groups in clinical studies, as described below through various means.

Understanding the Guidelines:

As per the information from the April 2022 guidance, sponsors are mandated to submit Diversity Plans for all medical products, especially during critical phases such as Investigational New Drug (IND) and Investigational Device Exemption (IDE) applications.

  • For IND applications, sponsors must submit the DP as early as possible during drug development or no later than when seeking feedback regarding pivotal trials.
  • For IDE applications, the DP should be included as part of the investigational plan submitted.

Key Components of the Diversity Plan:

The Diversity Plan should define enrollment goals for underrepresented racial and ethnic participants, emphasizing early integration into clinical development processes. 

It must review pertinent data indicating any differential safety or effectiveness associated with race or ethnicity concerning the investigational product (IP) itself. For drug development, this means reviewing and analyzing pharmacokinetic (PK), pharmacodynamic (PD), and pharmacogenomic data. Device development necessitates examining factors that may impact device performance across diverse populations, such as phenotypic, anatomical, or biological variations.

Additionally, the plan should describe in detail strategies for assessing race and ethnicity alongside other covariates. It should facilitate exposure-response analyses, inform dosing regimens for drugs, and evaluate the impact of factors such as skin pigmentation on device performance.

  • The DP should address continuous monitoring and pediatric studies. To ensure ongoing vigilance, sponsors must incorporate mechanisms for continuous data monitoring throughout the product lifecycle to identify any disparities in safety or effectiveness associated with race and ethnicity. Moreover, the plan should encompass pediatric studies integral to the overall product development process.

In setting enrollment goals and devising strategies, sponsors are encouraged to leverage diverse data resources, including published literature, and to collaborate with stakeholders to enhance inclusivity and efficacy in clinical trials.

In conclusion, the formulation and submission of a robust Diversity Plan signifies a commitment to equitable healthcare and patient-centric research practices. By prioritizing the inclusion of underrepresented populations and diligently monitoring for disparities, sponsors not only adhere to regulatory mandates but also foster a culture of diversity and inclusivity essential for advancing medical science and improving patient outcomes.



Clinical Research Methodology I: Introduction to Randomized Trials

With increasing initiatives to improve the effectiveness and safety of patient care, there is a growing emphasis on evidence-based medicine and incorporation of high-quality evidence into clinical practice. The cornerstone of evidence-based medicine is the randomized controlled trial (RCT). The World Health Organization defines a clinical trial as “any research study that prospectively assigns human participants or groups of humans to one or more health-related interventions to evaluate the effects on health outcomes.” 1 Randomization refers to the method of assignment of the intervention or comparison(s). Fewer than 10% of clinical studies reported in surgical journals are RCTs, 2 – 4 and treatments in surgery are only half as likely to be based on RCTs as treatments in internal medicine. 5

Multiple factors impede surgeons performing definitive RCTs, including the inability to blind health care providers and patients, small sample sizes, variations in procedural competence, and strong surgeon or patient preferences. 5 – 8 Not all questions can be addressed in an RCT; Solomon and colleagues 8 estimated that only 40% of treatment questions involving surgical procedures are amenable to evaluation by an RCT, even in an ideal clinical setting. In surgical oncology, trials evaluating survival after operations for a rare malignancy can require an unreasonably large sample size. Pawlik and colleagues 9 estimated that only 0.3% of patients with pancreatic adenocarcinoma could benefit from pancreaticoduodenectomy with extended lymphadenectomy; a randomized trial of 202,000 patients per arm would be necessary to detect a difference in survival.
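The scale of that problem can be illustrated with a standard two-proportion sample-size calculation. The sketch below is only illustrative: it uses the usual normal-approximation formula and hypothetical survival rates, not the actual calculation or data of Pawlik and colleagues, purely to show how a tiny absolute difference drives the required number of patients per arm into the tens of thousands.

```python
# Minimal sketch (normal approximation) of a two-proportion sample-size
# calculation. Survival rates below are hypothetical, NOT from the cited study.
from scipy.stats import norm

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Approximate number of patients per arm for a two-sided test."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return numerator / (p1 - p2) ** 2

# Hypothetical 5-year survival of 5.0% vs 5.3% (a 0.3-point absolute gain):
print(round(n_per_arm(0.050, 0.053)))  # roughly 85,000 patients per arm
```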

These reasons should not dissuade surgeons from performing RCTs. Even for rare diseases, randomized trials remain the best method to obtain unbiased estimates of treatment effect. 10 , 11 Rigorously conducted RCTs minimize bias by controlling for known and unknown factors (confounders) that affect outcomes and distort the apparent treatment effect. Observational studies, including those with the most sophisticated design and analysis, 12 , 13 can control only for known confounders and might not adequately control for those. Many surgical and medical interventions recommended based on observational studies have later been demonstrated to be ineffective or even harmful. These have included blood transfusions to maintain a hemoglobin >10 mg/dL in critically ill patients, 14 , 15 bone marrow transplantation for breast cancer, 16 – 19 and extracranial-intracranial bypass for carotid artery stenosis. 20 , 21

Another major reason for RCTs to be of interest to surgeons is that patients enrolled in trials can have improved short-term outcomes, even if the intervention is ineffective. 22 – 25 Potential sources of this benefit include enrollment of lower-risk patients, use of standardized protocols and improved supportive care, and greater effort to prevent and address treatment hazards. Different outcomes can also be observed in trial participants because of either the Hawthorne or placebo effect, both of which can distort the apparent treatment effect and threaten the validity of the trial. The Hawthorne effect occurs when changes in clinicians’ or patients’ behavior, because of being observed, result in improved outcomes. For example, a prospective observational study evaluating operating room efficiency after an intervention can demonstrate improvement over historic performance, in part because the staff is aware of being observed rather than as a result of the intervention. The placebo effect occurs when the patient derives benefit not from the treatment itself, but from the patient’s expectations for benefit. In a randomized trial of arthroscopic surgery versus sham surgery for osteoarthritis of the knee, the placebo procedure had equivalent results to debridement and lavage, despite lack of any therapeutic intervention. 26

Despite the advantages of well-conducted RCTs, poorly conducted trials or inadequately reported results can yield misleading information. 27 , 28 Recently, Chang and colleagues 29 demonstrated the continued paucity of high-level evidence in surgical journals and called for articles on clinical research methodology to educate surgeons. The purpose of this article is to serve as an introduction to RCTs, focusing on procedures for assigning treatment groups that serve to minimize bias and error in estimating treatment effects. Common threats to validity and potential solutions to difficulties in randomizing patients in surgical trials will also be discussed.

OBSERVATIONAL COHORT STUDIES

RCTs are the gold standard for evaluating the effectiveness of an intervention. Many therapies have historically been evaluated in surgery using observational cohort studies where groups of patients are followed for a period of time, either consecutively or concurrently. These studies can be conducted retrospectively or prospectively. The fundamental criticism of observational cohort studies is that confounding can result in biased estimates of treatment effect. 30 , 31

A confounder is a known or unknown factor that is related to the variable of interest (eg, an intervention) and is a cause of the outcomes. For example, suppose a study finds that patients undergoing a procedure by surgeon A have increased mortality when compared with surgeon B. The outcomes difference might not be a result of inferior operative technique of surgeon A, but rather confounders, such as patients’ comorbidities or severity of disease (eg, if surgeon A is referred the more complicated patients).

Observational studies cannot account for unknown confounders. Novel statistical methods can improve estimates of treatment effect because of known and unknown confounders in nonrandomized trials, but are still subject to limitations. 32 , 33 Traditionally, nonrandomized or observational studies adjust for known confounders in the statistical analysis. Adjustment refers to the mathematic modeling of the relationship between one or more predictor variables and the outcomes to estimate the isolated effect of each variable. Even with advanced statistical analyses, such as propensity scoring, these models cannot completely adjust for all of the confounders. 12 Although observational cohort studies have a role in clinical research, such as in answering questions about harm, well-designed RCTs are the gold standard for evaluating an intervention because they minimize bias from known and unknown confounders.

OVERVIEW OF RANDOMIZATION AND ALLOCATION CONCEALMENT

Properly designed RCTs minimize imbalances in baseline characteristics between groups that could distort the apparent effect of the treatment on patient outcomes. The randomization procedure used to assign treatment, and to prevent prediction of the treatment assignment that would bias allocation of the intervention, is especially important. With random assignment, each patient has the same chance of being assigned to a specific treatment group. Equal (1:1) allocation results in the same likelihood of assignment to either group (50:50) and the greatest power to detect a difference in outcomes between the groups. Unequal or weighted randomization allows the investigators to maintain a balance between groups in their baseline characteristics but to allocate more patients to one group (eg, with a 2:1 allocation, two-thirds of patients will be assigned to the first treatment and one-third to the second). Unequal randomization can be used to decrease costs when one treatment is considerably more expensive to provide than the other. 34 , 35

Valid methods of randomization include flipping a coin, rolling a die, using a table of random numbers, or running a computerized random allocation generator (eg, http://www.random.org ). Randomization should be performed in such a way that the investigator is not able to anticipate the treatment group. 36 – 38 Not all published trials reported as “randomized” are truly randomized. In fact, only 33% to 58% of published surgical trials describe a valid randomization process where the treatment assignment cannot be predicted. 39 , 40 For example, in a randomized trial evaluating screening mammography, participants were assigned based on which day of the month they were born (patients born between the 1st and 10th of the month or the 21st and 31st were assigned mammography, and patients born between the 11th and the 20th were assigned the control). 41 Results were questioned because anticipation of the treatment group can inadvertently influence whether a patient is considered to meet eligibility criteria or how much effort is devoted to securing informed consent, 42 causing selection bias, with baseline differences between groups that can influence the results. Other “pseudorandom” or “quasirandom” schemes include use of medical record number or date of enrollment.

Allocation concealment prevents the investigator or trial participant from consciously or subconsciously influencing the treatment assignment and causing selection bias. Allocation concealment, which occurs before randomization, should not be confused with blinding (also known as masking), which occurs after randomization. Where a valid randomization scheme has been used, allocation can still be inadequately concealed (eg, use of translucent envelopes containing treatment assignments). Methods of allocation concealment include use of sequentially numbered, sealed, opaque envelopes or allocation by a central office. Allocation concealment is always possible, although blinding is not. 38 Yet, between 1999 and 2003, only 29% of published surgical trials reported allocation concealment. 40 Although allocation concealment can be used but not noted in published reports, inadequate allocation concealment appears to be a large and generally unrecognized source of bias. Schulz and colleagues 43 found that treatment effect was overestimated by 41% when allocation concealment was inadequate or unclear.
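As a rough illustration of central allocation, the sketch below uses a hypothetical CentralAllocator class (not drawn from any trial software): the full assignment sequence is generated up front, kept out of the investigators' reach, and revealed only one assignment at a time after a patient has been registered.

```python
# Minimal sketch of centrally concealed allocation (illustrative only).
# The assignment list is generated in advance and held by a central "office";
# a site investigator can only request the next assignment after registering
# a patient, and can never preview the remaining sequence.
import random

class CentralAllocator:
    def __init__(self, arms=("A", "B"), n=20, seed=2024):
        rng = random.Random(seed)
        self._sequence = [rng.choice(arms) for _ in range(n)]  # hidden list
        self._next = 0

    def allocate(self, patient_id):
        """Reveal the next assignment only once a patient is registered."""
        arm = self._sequence[self._next]
        self._next += 1
        return patient_id, arm

central_office = CentralAllocator()
print(central_office.allocate("patient-001"))
print(central_office.allocate("patient-002"))
```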

VARIATIONS IN RANDOMIZATION SCHEMES

Simple randomization

The most straightforward scheme for allocating patients is simple randomization ( Fig. 1A ), with treatment assigned using one of the methods mentioned previously (eg, computer-generated random sequence). Simple randomization can result, by chance alone, in unequal numbers in each group—the smaller the sample size, the larger the likelihood of a major imbalance in the number of patients or the baseline characteristics in each group. 44 An example of simple randomization would be the sequence of 20 random numbers generated using a computer program ( Table 1 ).

Figure 1. Randomization procedures. In this example, balls represent patients and the color represents a prognostic factor (eg, ethnicity). (A) In simple randomization, group assignment can be determined by a flip of a coin, roll of a die, random number table, or computer program. With small sample sizes, there can be an unequal number in each treatment arm or an unequal distribution of prognostic factors, or both. Note that the numbers in the treatment arms are unequal and the prognostic factor is unevenly distributed between the two. (B) In blocked randomization with uniform or equal blocks, randomization occurs in groups (blocks), and the total sample size is a multiple of the block size. In the figure, balls were randomized in blocks of six. Note that the number in each treatment arm is equal, but the prognostic factor is not equally balanced between the two. (C) In blocked randomization with varied blocks, the size of the blocks changes either systematically or randomly to avoid predictability of treatment assignment. In the figure, the first block has eight balls and the second block has four balls. Again, the number in each treatment arm is equal, but the prognostic factor is not balanced between them. (D) In stratified blocked randomization, the total sample is divided into one or more subgroups (strata) and then randomized within each stratum. In this example, the sample population is divided into black and white balls and then randomized in blocks of six. There are an equal number of balls in both arms, and the distribution of the black and white balls is balanced as well.

Table 1. Examples of Randomization

Treatment could be allocated with even numbers receiving treatment A and odd numbers receiving treatment B. In this case, 9 patients would receive treatment A and 11 patients would receive treatment B ( Table 1 ).

Alternatively, patients assigned to a number between 1 and 50 could receive treatment A and patients assigned to a number between 51 and 100 could receive treatment B. In this case, 6 patients would receive treatment A and 14 would receive treatment B. In small trials or in larger trials with planned interim analyses, simple randomization can result in imbalanced group numbers. Even if the groups have equal numbers, there can be important differences in baseline characteristics that would distort the apparent treatment effect.
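A computer-generated sequence of the kind described above can be produced in a few lines. The sketch below does not reproduce the actual numbers in Table 1; it simply applies the even/odd assignment rule to 20 random numbers and shows how a small sample can come out imbalanced by chance.

```python
# Simple randomization sketch: 20 random numbers from 1-100,
# evens -> treatment A, odds -> treatment B. Any run illustrates how
# small samples can end up with unequal group sizes purely by chance.
import random

rng = random.Random(7)                       # fixed seed for reproducibility
numbers = [rng.randint(1, 100) for _ in range(20)]
assignments = ["A" if n % 2 == 0 else "B" for n in numbers]

print(list(zip(numbers, assignments)))
print("A:", assignments.count("A"), "B:", assignments.count("B"))
```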

Blocked (restricted) randomization

Simple randomization can result not only in imbalanced groups, but also in chronological bias, in which one treatment is predominantly assigned earlier and the other later in the trial. Chronological bias is important if outcomes change with time, as when surgeons become more adept at the procedure under investigation or increasing referrals for the procedure change the patient population. 45 Chronological bias results in an inability to differentiate between the effects of temporally related factors, such as surgeon experience and treatment. For these reasons, blocked (restricted) randomization schemes can be used ( Figs. 1B, 1C ).

For example, a uniform block size of 4 can be used with 1:1 allocation and two treatment arms. The two arms will never differ at any time by more than two patients, or half of the block length. There are six possible assignment orders for each block (called a permuted block) of four patients: AABB, ABAB, ABBA, BAAB, BABA, and BBAA. Although blocked randomization will maintain equal or nearly equal group sizes across time, selection bias can occur if the investigators are not blinded to block size and treatment assignment. If the first three patients in the trial received treatments A, A, and B, then the unblinded investigator might anticipate that the fourth patient will receive treatment B. The decision whether to enroll the next study candidate could be inadvertently affected, as a result, by the investigator’s treatment preference. 42 This problem can generally be avoided by randomly or systematically varying the block sizes ( Fig. 1C ).
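A minimal sketch of permuted-block randomization with randomly varied block sizes is shown below; the block sizes of 4 and 6 are an illustrative choice, not a recommendation. Varying the block size keeps the arms nearly equal over time while making the next assignment harder for an unblinded investigator to predict.

```python
# Permuted-block randomization sketch with randomly varied block sizes
# (4 or 6) and 1:1 allocation to arms A and B.
import random

def blocked_sequence(n_patients, block_sizes=(4, 6), seed=11):
    rng = random.Random(seed)
    seq = []
    while len(seq) < n_patients:
        size = rng.choice(block_sizes)       # must be even for 1:1 allocation
        block = ["A"] * (size // 2) + ["B"] * (size // 2)
        rng.shuffle(block)                   # one random permutation per block
        seq.extend(block)
    return seq[:n_patients]

print(blocked_sequence(20))
```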

Stratified randomization

Imbalances in prognostic factors between treatment arms can occur because of chance alone, even if randomization and allocation concealment are properly performed. Important imbalances are most likely to occur by chance in small trials or during interim analyses of large RCTs. 46 Prognostic stratification can be used to avoid such imbalances. Patients can be first categorized based on several prognostic factors into strata and then randomized within each stratum, guaranteeing no major imbalance between groups in these factors ( Fig. 1D ). For example, Fitzgibbons and colleagues 47 performed an RCT to evaluate whether watchful waiting is an acceptable alternative to tension-free repair for inguinal hernias in minimally symptomatic or asymptomatic patients. Eligible patients were stratified by center (six total), whether the hernia was primary or recurrent, and whether the hernia was unilateral or bilateral. The total number of strata for this study was 24, or the product of the number of levels of each factor (6 × 2 × 2). Once assigned to a stratum, patients were then randomized to either watchful waiting or hernia repair.

Because important prognostic factors will be balanced, stratified randomization can decrease the chance of a type I error (finding a difference between treatment arms because of chance alone) and can increase the power (the chance of finding a difference if one exists) of small studies, where the stratified factors have a large effect on outcomes. 46 Additionally, stratification can increase the validity of subgroup or interim analyses. 46 If too many strata are used, some strata might not be filled with equal numbers of patients in both groups, leading to imbalances in other prognostic factors. 46 Excessive stratification also unduly increases the complexity of trial administration, randomization, and analysis. Stratification is usually performed using only a small number of carefully selected variables likely to have a large impact on outcomes.
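Stratified blocked randomization can be sketched as one independent permuted-block sequence per stratum. The example below loosely mirrors the hernia-trial strata (center, primary vs recurrent, unilateral vs bilateral); the block size of 4 and the specific stratum values are illustrative assumptions, not details from that trial.

```python
# Stratified blocked randomization sketch: each stratum keeps its own
# permuted blocks so the two arms stay balanced within every stratum.
import random
from collections import defaultdict

rng = random.Random(3)
_blocks = defaultdict(list)                  # stratum -> remaining block

def assign(center, recurrent, bilateral):
    stratum = (center, recurrent, bilateral)
    if not _blocks[stratum]:                 # start a new permuted block of 4
        block = ["watchful waiting", "watchful waiting", "repair", "repair"]
        rng.shuffle(block)
        _blocks[stratum] = block
    return _blocks[stratum].pop()

print(assign(center=1, recurrent=False, bilateral=False))
print(assign(center=1, recurrent=False, bilateral=False))
print(assign(center=3, recurrent=True, bilateral=True))
```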

Adaptive randomization

Another strategy to minimize imbalances in prognostic factors is to use an adaptive randomization scheme, in which randomization is influenced by analysis of either the baseline characteristics or the outcomes of previous patients. When treatment assignment is based on patient characteristics, the adaptive randomization procedure known as minimization assigns the next treatment so as to minimize any imbalance in prognostic factors among previously enrolled patients. Because a computer algorithm is needed to run the procedure, minimization is generally limited to larger trials. 48
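A simplified, deterministic sketch of minimization follows: each new patient goes to whichever arm currently has the smaller total count across that patient's prognostic-factor levels. Real minimization procedures (e.g., in the Pocock-Simon style) typically assign to the minimizing arm with high probability rather than always, so this is a teaching sketch, not a trial-ready algorithm.

```python
# Simplified minimization sketch: assign each patient to the arm that
# minimizes total imbalance summed over that patient's factor levels.
counts = {"A": {}, "B": {}}                  # arm -> {(factor, level): n}

def imbalance(arm, profile):
    return sum(counts[arm].get(fl, 0) for fl in profile)

def minimize(profile):
    """profile: tuple of (factor, level) pairs, e.g. (('sex','F'), ('age','<65'))."""
    arm = "A" if imbalance("A", profile) <= imbalance("B", profile) else "B"
    for fl in profile:
        counts[arm][fl] = counts[arm].get(fl, 0) + 1
    return arm

print(minimize((("sex", "F"), ("age", "<65"))))
print(minimize((("sex", "F"), ("age", ">=65"))))
print(minimize((("sex", "M"), ("age", "<65"))))
```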

One response-adaptive randomization procedure used in trials examining a dichotomous outcome (eg, yes/no or survival/death) involves the “play-the-winner” strategy to allocate treatment based on the outcome of the last patient enrolled ( Fig. 2 ). The more successful a treatment, the more likely that the next patient will be randomized to that treatment. 49 For short-term trials where the treatments have been well-evaluated for safety, play-the-winner trials can reduce the likelihood that a patient is assigned to an ineffective or harmful treatment. 49 Adaptive trials are being increasingly used in phase 1 or 2 cancer trials. 50 – 52 The downside is that these trials are complex to plan and analyze, quite susceptible to chronological bias, and might not be persuasive.

Figure 2. Adaptive randomization: play-the-winner randomization rule. In this example, the color of the ball represents the treatment assignment (white = treatment A, black = treatment B). A ball is selected and then replaced. Based on the outcomes of the treatment selected, a ball representing the same or opposite treatment is added to the sample. The rule repeats itself. If the two treatments have similar outcomes, then there will be an equal distribution of balls at the end of the trial. If one treatment has substantially better outcomes, then there will be more balls representing that treatment. Patients entering the trial have a better chance of having an effective treatment and less of a chance of having an ineffective or harmful treatment.
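Following the urn scheme described in the figure, a minimal simulation looks like the sketch below. The success probabilities are hypothetical and chosen only to show how the urn drifts toward the better-performing treatment; they do not come from any real trial.

```python
# Play-the-winner urn sketch: start with one ball per treatment; after each
# patient, add a ball for the same treatment on success or for the opposite
# treatment on failure. Outcome probabilities are hypothetical.
import random

rng = random.Random(5)
urn = ["A", "B"]                             # one ball per treatment to start
true_success = {"A": 0.8, "B": 0.4}          # hypothetical outcome rates

assignments = []
for _ in range(30):
    arm = rng.choice(urn)                    # draw with replacement
    assignments.append(arm)
    success = rng.random() < true_success[arm]
    urn.append(arm if success else ("B" if arm == "A" else "A"))

print("A:", assignments.count("A"), "B:", assignments.count("B"))
```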

The most well-known and controversial play-the-winner randomized trial was the Michigan Extracorporeal Membrane Oxygenation (ECMO) trial for neonatal respiratory failure by Bartlett and colleagues. 53 Neonates with respiratory failure and a predicted ≥ 80% chance of mortality were randomized to either ECMO or conventional treatment. The investigators’ intent was to perform a methodologically sound, randomized trial that minimized the number of critically ill infants given the inferior treatment. The study was designed to end after 10 patients had received ECMO or 10 patients had received the control. The first patient enrolled received ECMO and survived; the second patient received the control and died. The trial was terminated after 11 patients, only 1 of whom received the control. The investigators concluded that ECMO improved survival when compared with conventional treatment in neonates with respiratory failure. The main criticism of the trial was that the control group included only one patient. 54 Widespread acceptance of ECMO for neonates with respiratory failure did not occur until after larger and more conventional trials were performed. 55

RANDOMIZED DESIGNS INCORPORATING PATIENT AND SURGEON PREFERENCES

Patient preference trials

Strong patient preferences can result in failure to enroll patients into surgical RCTs or serve as a theoretical threat to validity. 6 , 35 , 56 Patients with strong preferences for one treatment can differ from those without, resulting in selection bias or a systematic difference between patients enrolled in trials and those not enrolled. 57 Patient preference can also be an independent prognostic factor or can interact with the treatment to affect outcomes, particularly in unblinded RCTs. 58 For example, patients randomized to their preferred treatment can perform better because of increased compliance or a placebo effect, and patients randomized to their nonpreferred treatment can perform worse. 57 One potential solution is to measure baseline patient preferences and to mathematically adjust for the interaction between preference and treatment, but this approach increases sample size. 58

Another solution is to modify the trial design to incorporate patient or physician preferences using a comprehensive cohort design, Zelen’s design, or Wennberg’s design. 35 , 56 , 59 – 61 These trial designs have not been commonly used. In the comprehensive cohort design, patients without strong preferences are randomized, and patients with strong preferences are assigned to their treatment of choice. 35 It is often used in trials where participation in an RCT might be low because of strong patient preferences. Even if the proportion of patients randomized is low, this design should be less susceptible to bias than a nonrandomized cohort study, and it is often encouraged by statisticians when problems in accrual can limit the power of a conventional randomized trial. Because of the lower randomization rates, these trials can be more expensive, require more total patients, and be more difficult to interpret than conventional trials. 57

The National Institute of Child Health and Human Development Neonatal Research Network is currently planning a comprehensive cohort trial comparing laparotomy and peritoneal drainage for extremely low birth weight infants with severe necrotizing enterocolitis or isolated intestinal perforation (personal communication, Blakely). These conditions in this patient population are associated with a 50% mortality rate and a 72% rate of death or neurodevelopmental impairment. Surgical practice varies widely; caregivers often have strong treatment preferences, 62 , 63 and the consent rate can be no higher than 50% because of problems obtaining consent for emergency therapies. In this trial, the same risk and outcomes data will be collected for nonrandomized patients (observational or preference cohort) and for randomized patients. In the primary analysis, treatment effect will be assessed as the relative risk of death or neurodevelopmental impairment with laparotomy (relative to drainage) among randomized patients. The relative risk for death or impairment with laparotomy will also be assessed in the observational cohort after adjusting for important known risk factors. If the relative risk for the observational cohort is similar to that for randomized patients, all patients can be combined in a supplemental analysis to increase the power, precision, and generalizability of the study in assessing treatment effect. An analysis with all patients combined would not be performed if the relative risk is not comparable for the randomized patients and the observational cohort. In this circumstance, the difference might well be because of an inability to adjust for unknown or unmeasured confounders among patients treated according to physician or parent preference.

Zelen’s design, also known as the postrandomization consent design, has two variants. In the single-consent design, patients randomized to the standard therapy are not informed of the trial or offered alternative therapy. Consent is sought only for patients randomized to the intervention. If consent is refused, they are administered standard therapy but analyzed with the intervention group. The single-consent design raises ethical concerns because patients are randomized before consent and because patients receiving standard therapy are included without informed consent of their participation in the trial. In the double-consent design, consent is sought from patients randomized to the standard therapy as well as from those randomized to the intervention; both groups are informed of the trial and are allowed to receive the opposite treatment if consent is refused for the treatment to which they were randomized. Zelen’s design has been used to evaluate screening tools such as fecal occult blood testing for colorectal cancer. 64

With Wennberg’s design, patients are randomized to either a preference group or a randomization group. Patients in the preference group are offered their treatment of choice, and patients in the other group are assigned treatment based on randomization. All groups are analyzed to assess the impact of patient preference on outcomes. 35 , 59 Although patient preference trials are an alternative to RCTs, downsides include potential for additional differences between treatment groups other than preference and increased sample size requirements or cost to complete a trial. 34 , 57

EXPERTISE-BASED TRIALS

A proposed solution to the problem of variation between surgeons in skill and preference is the expertise-based RCT. In a conventional RCT evaluating two surgical procedures (eg, open versus laparoscopic hernia repair), a surgeon can be asked to perform both procedures, even though he or she might be adept with only one. Differential expertise bias can result from favoring the less technically challenging or more familiar procedure if a higher percentage of experienced surgeons performed that procedure. 45 Additionally, differential expertise can result in increased cross-over from one procedure to another (eg, conversion from laparoscopic to open hernia repair), or bias resulting from use of different co-interventions. 45

An expertise-based trial differs from a conventional RCT because surgeons perform only the procedure at which they believe they are most skilled. Proponents argue that expertise-based trials minimize bias resulting from differences in technical competency and surgeon preference, decrease crossover from one intervention to the other, and can be more ethical than conventional RCTs. 45 On the other hand, expertise-based RCTs present challenges in coordinating trials in which there are few experts for one or both procedures, in changing surgeons after the initial patient contact, and in generalizing the results to surgeons with less expertise. 45

For example, in a trial comparing open with endovascular aortic aneurysm repair, the investigators required each participating surgeon to have performed 20 endovascular aortic aneurysm repair procedures to control for expertise bias. 65 The trial demonstrated no difference in all-cause mortality between the groups. 66 Performance of 60 endovascular repairs, or 40 more than the minimum requirement for surgeon participation in this study, appears to be necessary to achieve an acceptable failure rate of < 10%. The minimum number of procedures required to participate in a trial is often less than the number needed to reach the plateau of the learning curve, biasing the results. 45 An expertise-based RCT, NExT ERA: National Expertise Based Trial of Elective Repair of Abdominal Aortic Aneurysms: A Pilot Study, is planned to prevent differential surgical expertise from affecting outcomes after aneurysm repair and complicating interpretation of the trial ( www.clinicaltrials.gov ; NCT00358085).

INTERNAL AND EXTERNAL VALIDITY IN RANDOMIZED TRIALS

Before applying the results of RCTs to individual patients, the internal and external validity of the trial must be examined. Internal validity refers to the adequacy of the trial design to provide a true estimate of association between an exposure and outcomes in patients studied, and external validity assesses the generalizability of the results to other patients. Threats to internal validity can result from either random or systematic error. Random errors result in errors in either direction, but systematic errors are a result of bias, resulting in consistent variation in the same direction. An example of random error is the up and down variability in blood pressure measurements based on the precision of an automatic cuff. A systematic error occurs when all of the blood pressure measurements are high because the cuff is too small. Bias can occur at any point in a trial, including during design, selection of the participants, execution of the intervention, outcomes measurement, data analysis, results interpretation, or publication. Specific types of bias include selection bias that results from systematic differences between treatment groups, confounding, ascertainment bias that results from lack of blinding of outcomes assessors, compliance bias because of differential adherence to the study protocols, and bias because of losses or withdrawals to followup. 67

External validity is dependent on multiple factors, including the characteristics of the participants, the intervention, and the setting of the trial. 68 Enrolled patients can differ substantially from eligible patients and ineligible patients with the condition of interest, or both, representing only a select population. An analysis of RCTs in high-impact medical journals found that only 47% of exclusion criteria were well-justified, and large subpopulations, such as women, children, the elderly, and patients with common medical conditions, were often excluded from RCTs. 69 In evaluating external validity, the difference between efficacy (explanatory) and effectiveness (management or pragmatic) trials must also be considered. Efficacy trials test whether therapies work under ideal conditions (eg, highly protocolized interventions and small number of homogeneous patients), and effectiveness trials test whether therapies work under routine or “real-world” circumstances (eg, large number of diverse patients and broad range of clinically acceptable co-interventions). Efficacy trials maximize internal validity and effectiveness trials emphasize external validity. 70

Well-designed RCTs reduce systematic errors from selection bias, biased treatment assignment, ascertainment bias, and confounding. Even with adequate randomization and allocation concealment, as described here, both random and systematic errors can still occur. Larger sample sizes can decrease the risk of imbalances because of chance and can increase the external validity of trials as well by including more diverse patients (eg, pragmatic trials). A description of all potential threats to validity is beyond the scope of this article.

Despite the perceived barriers to performing randomized clinical trials in surgery, they remain the gold standard for evaluating an intervention. Surgeons must be aware of the potential methodologic flaws that can invalidate results, both in interpreting and applying the literature and in designing future trials. To promote rigorous, high-quality studies, surgeons should be aware of variations in trial design, and increase use of alternative designs when conventional trials would not be feasible or suitable.

Acknowledgments

Dr Kao is supported by the Robert Wood Johnson Foundation Physician Faculty Scholars Award and the National Institutes of Health (K23RR020020-01). Dr Lally was supported by the National Institutes of Health (K24RR17050-05).

Competing Interests Declared: None.

Author Contributions Study conception and design: Kao, Lally

Acquisition of data: Kao

Analysis and interpretation of data: Kao, Lally

Drafting of manuscript: Kao, Tyson, Blakely, Lally

Critical revision: Kao, Tyson, Blakely, Lally

  • Open access
  • Published: 07 June 2024

Effects of intensive lifestyle changes on the progression of mild cognitive impairment or early dementia due to Alzheimer’s disease: a randomized, controlled clinical trial

  • Dean Ornish 1 , 2 ,
  • Catherine Madison 1 , 3 ,
  • Miia Kivipelto 4 , 5 , 6 , 7 ,
  • Colleen Kemp 8 ,
  • Charles E. McCulloch 9 ,
  • Douglas Galasko 10 ,
  • Jon Artz 11 , 12 ,
  • Dorene Rentz 13 , 14 , 15 ,
  • Jue Lin 16 ,
  • Kim Norman 17 ,
  • Anne Ornish 1 ,
  • Sarah Tranter 8 ,
  • Nancy DeLamarter 1 ,
  • Noel Wingers 1 ,
  • Carra Richling 1 ,
  • Rima Kaddurah-Daouk 18 ,
  • Rob Knight 19 ,
  • Daniel McDonald 20 ,
  • Lucas Patel 21 ,
  • Eric Verdin 22 , 23 ,
  • Rudolph E. Tanzi 13 , 24 , 25 , 26 &
  • Steven E. Arnold 13 , 27  

Alzheimer's Research & Therapy volume 16, Article number: 122 (2024)


Evidence links lifestyle factors with Alzheimer’s disease (AD). We report the first randomized, controlled clinical trial to determine if intensive lifestyle changes may beneficially affect the progression of mild cognitive impairment (MCI) or early dementia due to AD.

This was a 1:1 multicenter randomized controlled phase 2 trial in patients aged 45-90 with MCI or early dementia due to AD and a Montreal Cognitive Assessment (MoCA) score of 18 or higher. The primary outcome measures were changes in cognition and function tests, namely the Clinical Global Impression of Change (CGIC), Alzheimer's Disease Assessment Scale (ADAS-Cog), Clinical Dementia Rating-Sum of Boxes (CDR-SB), and Clinical Dementia Rating Global (CDR-G), after 20 weeks of an intensive multidomain lifestyle intervention compared to a wait-list usual care control group. The ADAS-Cog, CDR-SB, and CDR-Global scales were compared using a Mann-Whitney-Wilcoxon rank-sum test, and CGIC was compared using Fisher's exact test. Secondary outcomes included the plasma Aβ42/40 ratio and other biomarkers, and the correlation of lifestyle with the degree of change in these measures.
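For readers unfamiliar with the tests named above, the sketch below shows how such comparisons are commonly run in Python; the numbers are made-up toy values for illustration only and are not the trial's analysis code or data.

```python
# Illustrative use of the named tests with made-up toy data:
# Mann-Whitney-Wilcoxon rank-sum for an ordinal/continuous score (e.g., a
# change score), and Fisher's exact test for a 2x2 improved/not-improved table.
from scipy.stats import mannwhitneyu, fisher_exact

score_change_intervention = [-1.5, -0.5, 0.0, -2.0, 1.0, -1.0]   # hypothetical
score_change_control      = [ 0.5,  1.5, 2.0,  0.0, 1.0,  2.5]   # hypothetical
print(mannwhitneyu(score_change_intervention, score_change_control))

# rows: intervention / control; columns: improved / not improved (hypothetical)
table = [[10, 14],
         [ 2, 23]]
print(fisher_exact(table))
```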

Fifty-one AD patients enrolled, with a mean age of 73.5 years. There were no significant differences in any measures at baseline, and only two patients withdrew. All patients had plasma Aβ42/40 ratios <0.0672 at baseline, strongly supporting the AD diagnosis. After 20 weeks, there were significant between-group differences in the CGIC (p = 0.001), CDR-SB (p = 0.032), and CDR Global (p = 0.037) tests and borderline significance in the ADAS-Cog test (p = 0.053). CGIC, CDR Global, and ADAS-Cog showed improvement in cognition and function, and CDR-SB showed significantly less progression, compared to the control group, which worsened in all four measures. The Aβ42/40 ratio increased in the intervention group and decreased in the control group (p = 0.003). There was a significant correlation between the degree of lifestyle change and both cognitive function and the plasma Aβ42/40 ratio. The microbiome improved only in the intervention group (p < 0.0001).

Conclusions

Comprehensive lifestyle changes may significantly improve cognition and function after 20 weeks in many patients with MCI or early dementia due to AD.

Trial registration

Approved by Western Institutional Review Board on 12/31/2017 (#20172897) and by Institutional Review Boards of all sites. This study was registered retrospectively with clinicaltrials.gov on October 8, 2020 (NCT04606420, ID: 20172897).

Increasing evidence links lifestyle factors with the onset and progression of dementia, including AD. These include unhealthful diets, being sedentary, emotional stress, and social isolation.

For example, a Lancet commission on dementia prevention, intervention, and care listed 12 potentially modifiable risk factors that together account for an estimated 40% of the global burden of dementia [ 1 ]. Many of these factors (e.g., hypertension, smoking, depression, type 2 diabetes, obesity, physical inactivity, and social isolation) are also risk factors for coronary heart disease and other chronic illnesses because they share many of the same underlying biological mechanisms. These include chronic inflammation, oxidative stress, insulin resistance, telomere shortening, sympathetic nervous system hyperactivity, and others [ 2 ]. A recent study reported that the association of lifestyle with cognition is mostly independent of brain pathology, though a part, estimated to be only 12%, was through β-amyloid [ 3 ].

In one large prospective study of adults 65 or older in Chicago, the risk of developing AD was 38% lower in those eating high vs low amounts of vegetables and 60% lower in those consuming omega-3 fatty acids at least once per week [ 4 ], whereas consuming saturated fat and trans fats more than doubled the risk of developing AD [ 5 ]. A systematic review and meta-analysis of 243 observational prospective studies and 153 randomized controlled trials found a similar relationship between these and similar risk factors and the onset of AD [ 6 ].

The multifactorial etiology and heterogeneity of AD suggest that multidomain lifestyle interventions may be more effective than single-domain ones for reducing the risk of dementia, and that more intensive multimodal lifestyle interventions may be more efficacious than moderate ones at preventing dementia [ 7 ].

For example, in the Finnish Geriatric Intervention Study (FINGER), an RCT of men and women aged 60-77 with Cardiovascular Risk Factors, Aging, and Incidence of Dementia (CAIDE) dementia risk scores of at least 6 points and cognition at or slightly below the mean, a multimodal intervention of diet, exercise, cognitive training, and vascular risk monitoring maintained cognitive function after 2 years in older adults at increased risk of dementia [ 8 ]. After 24 months, global cognition in the FINGER intervention group was 25% higher than in the control group, which declined. Moreover, the FINGER intervention was equally beneficial regardless of several demographic and socioeconomic risk factors [ 9 ] and apolipoprotein E (APOE) ε4 status [ 10 ].

The FINGER lifestyle intervention also resulted in a 13-20% reduction in rates of cardiovascular disease events (stroke, transient ischemic attack, or coronary events), providing more evidence that “what’s good for the heart is good for the brain” (and vice versa) [ 11 ]. Other large-scale multidomain intervention studies designed to determine whether this type of intervention can help prevent dementia are being conducted or planned in over 60 countries worldwide as part of the World-Wide FINGERS network, including the POINTER study in the U.S. [ 12 , 13 ].

More recently, a similar dementia prevention-oriented RCT showed that a 2-year personalized multidomain intervention led to modest improvements in cognition and dementia risk factors in those at risk for (but not diagnosed with) dementia and AD [ 14 ].

All these studies showed that lifestyle changes may help prevent dementia. The study we are reporting here is the first randomized, controlled clinical trial to test whether intensive lifestyle changes may beneficially affect those already diagnosed with mild cognitive impairment (MCI) or early dementia due to AD.

In two earlier RCTs, we found that the same multimodal lifestyle intervention described in this article resulted in regression of coronary atherosclerosis as measured by quantitative coronary arteriography [ 15 ], improved ventricular function [ 16 ], improvements in myocardial perfusion as measured by cardiac PET scans, and 2.5 times fewer cardiac events after five years, all of which were statistically significant [ 17 ]. Until then, it was believed that the progression of coronary heart disease could only be slowed, not stopped or reversed, similar to how MCI or early dementia due to AD are viewed today.

Since AD and coronary heart disease share many of the same risk factors and biological mechanisms, and since moderate multimodal lifestyle changes may help prevent AD [ 18 ], we hypothesized that a more intensive multimodal intervention proven to often reverse the progression of coronary heart disease and some other chronic diseases may also beneficially affect the progression of MCI or early dementia due to AD.

We report here results of a randomized controlled trial to determine if the progression of MCI or early dementia due to AD may be slowed, stopped, or perhaps even reversed by a comprehensive, multimodal, intensive lifestyle intervention after 20 weeks when compared to a usual-care randomized control group. This lifestyle intervention includes (1) a whole foods, minimally processed plant-based diet low in harmful fats, refined carbohydrates, and sweeteners, with selected supplements; (2) moderate exercise; (3) stress management techniques; and (4) support groups.

This intensive multimodal lifestyle modification RCT sought to address the following questions:

Can the specified multimodal intensive lifestyle changes beneficially affect the progression of MCI or early dementia due to AD as measured by the AD Assessment Scale–Cognitive Subscale (ADAS-Cog), CGIC (Clinical Global Impression of Change), CDR-SB (Clinical Dementia Rating Sum of Boxes), and CDR-G (Clinical Dementia Rating Global) testing?

Is there a significant correlation between the degree of lifestyle change and the degree of change in these measures of cognition and function?

Is there a significant correlation between the degree of lifestyle change and the degree of change in selected biomarkers (e.g., the plasma Aβ42/40 ratio)?

Participants and methods

This study was a 1:1 multi-center RCT; the findings from the first 20 weeks are reported here. Patients who met the clinical trial inclusion criteria were enrolled between September 2018 and June 2022.

Participants were enrolled who met the following inclusion criteria:

Male or female, ages 45 to 90

Current diagnosis of MCI or early dementia due to AD process, with a MoCA score of 18 or higher (National Institute on Aging–Alzheimer’s Association McKhann and Albert 2011 criteria) [ 19 , 20 ]

Physician shared this diagnosis with the patient and approved their participation in this clinical trial

Willingness and ability to participate in all aspects of the intervention

Availability of spouse or caregiver to provide collateral information and assist with study adherence

Patients were excluded if they had any of the following:

Moderate or severe dementia

Physical disability that precludes regular exercise

Evidence for other primary causes of neurodegeneration or dementia, e.g., significant cerebrovascular disease (i.e., dementia primarily vascular in origin), Lewy body disease, Parkinson's disease, or frontotemporal dementia (FTD)

Significant ongoing psychiatric or substance abuse problems

Fifty-one participants with MCI or early-stage dementia due to AD who met these inclusion criteria were enrolled between September 2018 and June 2022 and underwent baseline testing. Twenty-six of the enrolled participants were randomly assigned to an intervention group that received the multimodal lifestyle intervention for 20 weeks, and 25 participants were randomly assigned to a usual habits and care control group that was asked not to make any lifestyle changes for 20 weeks, after which they would be offered the intervention. Patients in both groups received standard-of-care treatment managed by their own neurologist.

The intervention group received the lifestyle program for 20 weeks (initially in person, then via synchronous Zoom after March 2020 due to COVID-19). Two participants who did not want to continue these lifestyle changes withdrew during this time, both in the intervention group (one male, one female). Participants in both groups completed a follow-up visit at 20 weeks, where clinical and cognitive assessments were completed. Data were analyzed comparing the baseline and 20 week assessments between the groups.

In a drug trial, access to an investigational new drug can be restricted from participants in a randomized control group. However, we learned in our prior clinical trials of this lifestyle intervention with other diseases that it is often difficult to persuade participants who are randomly assigned to a usual-care control group to refrain from making these lifestyle changes for more than 20 weeks, which is why this time duration was chosen. If participants in both groups made similar lifestyle changes, then it would not be possible to show differences between the groups. Therefore, to encourage participants randomly assigned to the control group not to make lifestyle changes during the first 20 weeks, we offered to provide them the same lifestyle program at no cost for 20 weeks after they completed the usual-care control period and the 20-week testing.

We initially planned to enroll 100 patients into this study based on power calculations of possible differences between groups in cognition and function after 20 weeks. However, due to challenges in recruiting patients, especially during the COVID-19 emergency and because many pharmaceutical trials began recruiting patients with similar criteria, it took longer to enroll patients than initially planned [ 21 ]. Because of this, we terminated recruitment after 51 patients were enrolled. This decision was based only on recruitment issues and limited funding, without reviewing the data at that time.
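The specific assumptions behind the original power calculation are not given here; the sketch below (Python, using statsmodels) shows only how a target in the range of 100 participants can arise from a standard two-sample calculation under a hypothetical effect size of 0.6, 80% power, and a two-sided alpha of 0.05. The effect size is an assumption chosen for illustration, not a value from the study.

    # Illustrative two-sample power calculation (assumed effect size, not study data).
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    n_per_group = analysis.solve_power(effect_size=0.6, alpha=0.05, power=0.80,
                                       ratio=1.0, alternative="two-sided")
    # ~45 per group, i.e., ~90 total before allowing for dropouts.
    print(f"required sample size per group: {n_per_group:.1f}")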

Patients were recruited from advertisements, presentations at neurology meetings, referrals from diverse groups of neurologists and other physicians, and a search of an online database of patients at UCSF. We put a special emphasis on recruiting diverse patients, although we were less successful in doing so than we hoped (Table 1 ).

This clinical trial was approved by the Western Institutional Review Board on 12/31/2017 (approval number: 20172897), and the trial protocol was also approved by the appropriate Institutional Review Board of each participating site. All participants and their study partners provided written informed consent. Due to the COVID-19 emergency, planned MRI and amyloid PET scans were no longer feasible, and the number of cognition and function tests was decreased. An initial inclusion criterion of “current diagnosis of mild to moderate dementia due to AD (McKhann et al., 2011)” was further clarified to include a MoCA score of 18 or higher. This study was registered with clinicaltrials.gov on October 8, 2020 (NCT04606420, Unique Protocol ID: 20172897) retrospectively due to an administrative error. None of the sponsors who provided funding for this study participated in its design, conduct, management, or reporting of the results. Those providing the lifestyle intervention were separate from those performing testing and from those collecting and analyzing the data, who were blinded to group assignment. All authors contributed to manuscript draft revisions, provided critical comment, and approved submission for publication.

Any modifications in the protocol were approved in advance and in writing by the senior biostatistician (Charles McCulloch PhD) or the senior expert neuropsychologist (Dorene Rentz PsyD), and subsequently approved by the WIRB.

Patients were initially recruited only from the San Francisco Bay area beginning October 2018 and met in person until February 2020 when the COVID-19 pandemic began. Subsequently, this multimodal lifestyle intervention was offered to patients at home in real time via Zoom.

Offering this intervention virtually provided an opportunity to recruit patients from multiple sites, including Massachusetts General Hospital/Harvard Medical School, Boston, MA; the University of California, San Diego; and Renown Regional Medical Center, Reno, NV, as well as from neurologists in the San Francisco Bay Area. These participants were recruited and tested locally at each site, the intervention was provided via Zoom, and foods were shipped directly to their homes.

Patient recruitment

This is described in the Supplemental Materials section.

Intensive multimodal lifestyle intervention

Each patient received a copy of a book that describes this lifestyle medicine intervention for other chronic diseases [ 2 ].

  • Diet

A whole foods, minimally processed plant-based (vegan) diet, high in complex carbohydrates (predominantly fruits, vegetables, whole grains, legumes, soy products, seeds, and nuts) and especially low in harmful fats, sweeteners, and refined carbohydrates. It was approximately 14-18% of calories as total fat, 16-18% protein, and 63-68% mostly complex carbohydrates. Calories were unrestricted. Those with higher caloric needs were given extra portions.
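As a purely illustrative check of how such targets translate into food, the short sketch below converts hypothetical daily macronutrient totals (grams, not taken from the study menus) into percent of calories using the standard Atwater factors of 9 kcal/g for fat and 4 kcal/g for protein and carbohydrate.

    # Hypothetical one-day totals in grams; values are invented for illustration.
    fat_g, protein_g, carb_g = 30.0, 70.0, 280.0

    kcal_fat, kcal_protein, kcal_carb = fat_g * 9, protein_g * 4, carb_g * 4
    total_kcal = kcal_fat + kcal_protein + kcal_carb  # 1670 kcal

    print(f"fat: {100 * kcal_fat / total_kcal:.1f}% of calories")           # ~16.2%
    print(f"protein: {100 * kcal_protein / total_kcal:.1f}% of calories")   # ~16.8%
    print(f"carbohydrate: {100 * kcal_carb / total_kcal:.1f}% of calories") # ~67.1%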

To assure the high adherence and standardization required to adequately test the hypothesis, 21 meals/week and snacks plus the daily supplements listed below were provided throughout the 40 weeks of this intervention to each study participant and his or her spouse or study partner at no cost to them. Twice/week, we overnight shipped to each patient as well as to their spouse or study partner three meals plus two snacks per day that met the nutritional guidelines as well as the prescribed nutritional supplements.

We asked participants to consume only the food and nutritional supplements we sent to them and no other foods. We reasoned that if adherence to the diet and lifestyle intervention was high, whatever outcomes we measured would be of interest. That is, if patients in the intervention group were adherent but showed no significant benefits, that would be a disappointing but an important finding. If they showed improvement, that would also be an important finding. But if they did not follow the lifestyle intervention sufficiently, then we would not have been able to adequately test the hypotheses.

  • Exercise

Aerobic exercise (e.g., walking) at least 30 minutes/day and mild strength training at least three times/week, prescribed by an exercise physiologist in person or via virtual sessions. Patients were given a personalized exercise prescription based on age and fitness level. All sessions were overseen by a registered nurse.

  • Stress management

Meditation, gentle yoga-based poses, stretching, progressive relaxation, breathing exercises, and imagery for a total of one hour per day, supervised by a certified stress management specialist. The purpose of each technique was to increase the patient’s sense of relaxation, concentration, and awareness. They were also given access to online meditations. Patients had the option of using flashing-light glasses at a theta frequency of 7.83 Hz plus soothing music as an aid to meditation and insomnia [ 22 ]. They were also encouraged to get adequate sleep.

  • Group support

Participants and their spouses/study partners participated in a support group one hour/session, three days/week, supervised by a licensed mental health professional in a supportive, safe environment to increase emotional support and community as well as communication skills and strategies for maintaining adherence to the program. They also received a book with memory exercises used periodically during group sessions [ 23 ].

To reinforce this lifestyle intervention, each patient and their spouse or study partner met three times/week, four hours/session via Zoom:

one hour of supervised exercise (aerobic + strength training)

one hour of stress management practices (stretching, breathing, meditation, imagery)

one hour of a support group

one hour lecture on lifestyle

Additional optional exercise and stress management classes were provided.

Supplements

Omega-3 fatty acids with Curcumin (1680 mg omega-3 & 800 mg Curcumin, Nordic Naturals ProOmega CRP, 4 capsules/day). Omega-3 fatty acids: among adults aged 65 or older, consuming omega-3 fatty acids once/week or more was associated with a 60% lower risk of developing AD, and total intake of n-3 polyunsaturated fatty acids was associated with reduced risk of Alzheimer disease [ 24 ]. Curcumin targets inflammatory and antioxidant pathways as well as (directly) amyloid aggregation [ 25 ], although there may be problems with bioavailability and crossing the blood-brain barrier [ 26 ].

Multivitamin and Minerals (Solgar VM-75 without iron, 1 tablet/day). Combinatorial formulations demonstrate improvement in cognitive performance and the behavioral difficulties that accompany AD [ 27 ].

Coenzyme Q10 (200 mg, Nordic Naturals, 2 soft gels/day). CoQ10 may reduce mitochondrial impairment in AD [ 28 ].

Vitamin C (1 gram, Solgar, 1 tablet/day): Maintaining healthy vitamin C levels may have a protective function against age-related cognitive decline and AD [ 29 ].

Vitamin B12 (500 mcg, Solgar, 1 tablet/day): B12 hypovitaminosis is linked to the development of AD pathology [ 30 ].

Magnesium L-Threonate (Mg) (144 mg, Magtein, 2 tablets/day). A meta-analysis found that Mg deficiency may be a risk factor for AD and that Mg supplementation may be an adjunctive treatment for AD [ 31 ].

Hericium erinaceus (Lion’s Mane, Stamets Host Defense, 2 grams/day): Lion’s mane may produce significant improvements in cognition and function in healthy people over 50 [ 32 ] and in MCI patients compared to placebo [ 33 ].

Super Bifido Plus Probiotic (Flora, 1 tablet/day). A meta-analysis suggests that probiotics may benefit AD patients [ 34 ].

Primary outcome measures: cognition and function testing

Four tests were used to assess changes in cognition and function in these patients. These are standard measures of cognition and function included in many FDA drug trials: ADAS-Cog; Clinical Global Impression of Change (CGIC); Clinical Dementia Rating Sum of Boxes (CDR-SB); Clinical Dementia Rating Global (CDR Global). All cognition and function raters were trained psychometrists with experience in administering these tests in clinical trials. Efforts were made to have the same person perform cognitive testing at each visit to reduce inter-observer variability. Those doing ADAS-Cog assessments were certified raters and tested patients in person. The CGIC and CDR tests were administered for all patients via Zoom by different raters than those administering the ADAS-Cog. Also, raters were blinded to treatment arm to the degree possible.

Secondary outcome measures: biomarkers and microbiome

These are described in the Supplemental Materials section. These include blood-based biomarkers (such as the plasma Aβ42/40 ratio) and microbiome taxa (organisms).

Statistical methods

These are described in the Supplemental Materials section.

The recruitment effort for this trial lasted from 01/23/2018 to 6/16/2022. The most effective recruitment method was referral from the subjects’ physician or healthcare provider. Additional recruitment efforts included advertising in print and digital media; speaking to community groups; mentioning the study during podcast and radio interviews; collaborating with research institutions that provide dementia diagnosis and treatment; and contracting a clinical trials recruitment service (Linea). A total of 1585 people contacted us; of these, 1300 did not meet the inclusion criteria, 102 declined participation, and 132 had not completed screening when enrollment closed, resulting in the enrollment of 51 participants (Fig. 1).

Figure 1. CONSORT flowchart: patients, demographics, and enrollment

The remaining 51 patients were randomized to an intervention group (26 patients) that received the lifestyle intervention for 20 weeks or to a usual-care control group (25 patients) that was asked not to make any lifestyle changes. Two patients in the intervention group withdrew during the intervention because they did not want to continue the diet and lifestyle changes. No patients in the control group withdrew prior to 20-week testing. Analyses were performed on the remaining 49 patients. No patients were lost to follow-up.

All of these 49 patients had plasma Aβ42/40 ratios <0.089 (all were <0.0672), strongly supporting the diagnosis of Alzheimer’s disease [ 35 ].

At baseline, there were no statistically significant differences between the intervention group and the randomized control group in any measures, including demographic characteristics, cognitive function measures, or biomarkers (Table 1  and Table 2 ).

Cognition and function testing: primary analysis

Results after 20 weeks of a multimodal intensive lifestyle intervention in all patients showed overall statistically significant differences between the intervention group and the randomized control group in cognition and function in the CGIC ( p = 0.001), CDR-SB ( p = 0.032), and CDR Global ( p = 0.037) tests and of borderline significance in the ADAS-Cog test ( p = 0.053, Table 3 ). Three of these measures (CGIC, CDR Global, ADAS-Cog) showed improvement in cognition and function in the intervention group and worsening in the control group, and one test (CDR-SB) showed significantly less progression when compared to the randomized control group, which worsened in all four of these measures.
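The statistical methods used for these comparisons are described in the Supplemental Materials; the snippet below is only a minimal illustrative sketch of one common way to compare 20-week change scores between two groups (Welch's t-test), using invented placeholder data rather than study data.

    import numpy as np
    from scipy import stats

    # Placeholder 20-week change scores (follow-up minus baseline); negative = improvement.
    change_intervention = np.array([-2.0, -1.0, 0.5, -3.0, 1.0, -0.5, -1.5, 0.0])
    change_control = np.array([1.5, 0.5, 2.0, 1.0, 0.0, 2.5, 0.5, 1.0])

    # Welch's two-sample t-test on the change scores (unequal variances assumed).
    t_stat, p_value = stats.ttest_ind(change_intervention, change_control, equal_var=False)
    print(f"difference in mean change: {change_intervention.mean() - change_control.mean():.2f}, "
          f"p = {p_value:.3f}")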

Primary analysis (with outlier included), Table 3:

CGIC (Clinical Global Impression of Change)

These scores improved in the intervention group and worsened in the control group.

(Fisher’s exact p-value = 0.001). Ten people in the intervention group showed improvement compared to none in the control group. Seven people in the intervention group and 8 people in the control group were unchanged. Seven people in the intervention group showed minimal worsening compared to 14 in the control group. None in the intervention group showed moderate worsening compared to 3 in the control group.
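As a simplified, illustrative check of the Fisher's exact comparison (not the study's actual analysis, which used all four CGIC categories), the counts above can be collapsed into improved vs. not improved and tested as a 2 x 2 table with SciPy:

    from scipy.stats import fisher_exact

    # Counts from the text: 10 of 24 intervention patients improved on the CGIC vs. 0 of 25 controls.
    table = [[10, 24 - 10],   # intervention: improved, not improved
             [0, 25 - 0]]     # control:      improved, not improved
    odds_ratio, p = fisher_exact(table, alternative="two-sided")
    print(f"Fisher's exact p-value (collapsed 2 x 2): {p:.4f}")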

CDR-Global (Clinical Dementia Rating-Global)

These scores improved in the intervention group (from 0.69 to 0.65) and worsened in the randomized control group (from 0.66 to 0.74), mean difference = 0.12, p = 0.037 (Table 3 and Fig. 2 ).

Figure 2. Changes in CDR-Global (lower = improved)

ADAS-Cog (Alzheimer’s Disease Assessment Scale–Cognitive Subscale)

These scores improved in the intervention group (from 21.551 to 20.536) and worsened in the randomized control group (from 21.252 to 22.160), mean group difference of change = 1.923 points, p = 0.053 (Table 3 and Fig. 3 ). (ADAS-Cog testing in one intervention group patient was not administered properly so it was excluded.)

Figure 3. Changes in ADAS-Cog (lower = improved)

CDR-SB (Clinical Dementia Rating Sum of Boxes)

These scores worsened significantly more in the control group (from 3.34 to 3.86) than in the intervention group (from 3.27 to 3.35), mean group difference = 0.44, p = 0.032 (Table 3 and Fig. 4 ).

Figure 4. Changes in CDR-SB (lower = improved)

There were no significant differences in depression scores as measured by PHQ-9 between the intervention and control groups.

Secondary sensitivity analyses

One patient in the intervention group was a clear statistical outlier in his cognitive function testing based on standard mathematical definitions (none was an outlier in the control group) [ 36 ]. Therefore, this patient’s data were excluded in a secondary sensitivity analysis. These results showed statistically significant differences in all four of these measures of cognition and function (Table 4 ). Three measures (ADAS-Cog, CGIC, and CDR Global) showed significant improvement in cognition and function and one (CDR-SB) showed significantly less worsening when compared to the randomized control group, which worsened in all four of these measures.
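The outlier definition cited is Tukey's standard exploratory-data-analysis rule [ 36 ]. A minimal sketch of Tukey's fences (values more than 1.5 x the interquartile range beyond the quartiles), applied to invented change scores, is shown below.

    import numpy as np

    def tukey_outliers(values, k=1.5):
        # Flag values outside Tukey's fences: [Q1 - k*IQR, Q3 + k*IQR].
        values = np.asarray(values, dtype=float)
        q1, q3 = np.percentile(values, [25, 75])
        iqr = q3 - q1
        return (values < q1 - k * iqr) | (values > q3 + k * iqr)

    # Hypothetical change scores; the 9.0 is flagged as an outlier.
    print(tukey_outliers([-2.0, -1.5, 0.0, -0.5, 1.0, -1.0, 9.0, 0.5]))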

Sensitivity analysis (with outlier excluded)

There were no significant differences in depression scores as measured by PHQ-9 between the intervention and control groups in either analysis.

A reason why this patient might have been a statistical outlier is that he reported intense situational stress before his testing. As a second sensitivity analysis, this same outlier patient was retested when he was calmer, and all four measures (ADAS-Cog, CGIC, CDR Global, and CDR-SB) showed significant improvement in cognition and function, whereas the randomized control group worsened in all four of these measures.

Biomarker results

We selected biomarkers that have a known role in the pathophysiology of AD (Table 5 ). Of note is that the plasma Aβ42/40 ratio increased in the intervention group but decreased in the randomized control group ( p = 0.003, two-tailed).

Correlation of lifestyle index and cognitive function

In the current clinical trial, despite the inherent limitations of self-reported data, we found statistically significant correlations between the degree of lifestyle change (from baseline to 20 weeks) and the degree of change in three of four measures of cognition and function as well as correlations between the adherence to desired lifestyle changes at just the 20-week timepoint and the degree of change in two of the four measures of cognition and function and borderline significance in the fourth measure.

Correlation with lifestyle at 20 weeks: p = 0.052; correlation: 0.241

Correlation with degree of change in lifestyle: p = 0.015; correlation: 0.317

Correlation with lifestyle at 20 weeks: p = 0.043; correlation: 0.251

Correlation with degree of change in lifestyle: p = 0.081; correlation: 0.205

Correlation with lifestyle at 20 weeks: p = 0.065; correlation: 0.221

Correlation with degree of change in lifestyle: p = 0.024; correlation: 0.286

Correlation with lifestyle at 20 weeks: p = 0.002

Correlation with degree of change in lifestyle: p = 0.0005

(CGIC tests are non-parametric analyses, so standard effect size calculations are not included for this measure.)

We also found a significant correlation between dietary total fat intake and changes in the CGIC measure (p = 0.001), but this was not significant for the other three measures.
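The dose-response analyses above are correlations between a composite lifestyle index and change scores. A minimal sketch of such a calculation is shown below, using invented placeholder arrays; Pearson correlation is shown for continuous measures, with Spearman as a non-parametric alternative appropriate for ordinal measures such as the CGIC.

    import numpy as np
    from scipy import stats

    # Placeholder data: lifestyle index at 20 weeks (higher = more adherent) and
    # change in a cognitive measure (negative = improvement). Not study data.
    lifestyle_index = np.array([0.55, 0.70, 0.85, 0.60, 0.95, 0.75, 0.80, 0.65])
    cognitive_change = np.array([1.0, 0.5, -1.5, 0.5, -2.0, -0.5, -1.0, 0.0])

    r, p = stats.pearsonr(lifestyle_index, cognitive_change)
    rho, p_s = stats.spearmanr(lifestyle_index, cognitive_change)
    print(f"Pearson r = {r:.2f} (p = {p:.3f}); Spearman rho = {rho:.2f} (p = {p_s:.3f})")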

Correlation of lifestyle index and biomarker data

In the current clinical trial, despite the inherent limitations of self-reported data, we found statistically significant correlations between the degree of lifestyle change (from baseline to 20 weeks) and the degree of change in many of the key biomarkers, as well as correlations between the degree of lifestyle change at 20 weeks and the degree of change in these biomarkers:

Plasma Aβ42/40 ratio

Correlation with lifestyle at 20 weeks: p = 0.035; correlation: 0.306

Correlation with degree of change in lifestyle: p = 0.068; correlation: 0.266

Correlation with lifestyle at 20 weeks: p = 0.011; correlation: 0.363

Correlation with degree of change in lifestyle: p = 0.007; correlation: 0.383

LDL-cholesterol

Correlation with lifestyle at 20 weeks: p < 0.0001; correlation: 0.678

Correlation with degree of change in lifestyle: p < 0.0001; correlation: 0.628

Beta-Hydroxybutyrate (ketones)

Correlation with lifestyle at 20 weeks: p = 0.013; correlation: 0.372

Correlation with degree of change in lifestyle: p = 0.034; correlation: 0.320

Correlation with lifestyle at 20 weeks: p = 0.228; correlation: 0.177

Correlation with degree of change in lifestyle: p = 0.135; correlation: 0.219

GFAP/glial fibrillary acidic protein

Correlation with lifestyle at 20 weeks: p = 0.096; correlation: 0.243

Correlation with degree of change in lifestyle: p =0.351; correlation: 0.138

What degree of lifestyle change is correlated with improvement in cognitive function tests?

What degree of lifestyle change is needed to stop the worsening of, or improve, MCI or early dementia due to AD? In other words, what percentage of adherence to the lifestyle intervention was correlated with no change in MCI or dementia across both groups? Adherence higher than this threshold was associated with improvement in MCI or dementia.

ADAS-Cog:

Correlation with lifestyle at 20 weeks: 71.4% adherence

CDR-SB:

Correlation with lifestyle at 20 weeks: 120.6% adherence

CDR-Global:

Correlation with lifestyle at 20 weeks: 95.6% adherence
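The exact method used to derive these adherence thresholds is not described in this section; one plausible approach, sketched below with invented data, is to regress change in a cognitive measure on adherence across both groups and solve for the adherence level at which the predicted change is zero (a threshold above 100%, as for CDR-SB, can arise when the fitted line crosses zero beyond the observed range).

    import numpy as np

    # Placeholder data across both groups: adherence (1.0 = 100%) and change in a
    # cognitive measure (negative = improvement). Not study data.
    adherence = np.array([0.10, 0.20, 0.40, 0.55, 0.70, 0.80, 0.90, 1.00])
    change = np.array([2.0, 1.6, 1.0, 0.6, 0.1, -0.3, -0.6, -1.0])

    # Fit change = slope * adherence + intercept and solve for change == 0.
    slope, intercept = np.polyfit(adherence, change, 1)
    print(f"adherence at which predicted change is zero: {100 * (-intercept / slope):.1f}%")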

Microbiome results

There was a significant and beneficial change in the microbiome configuration in the intervention group but not in the control group.

Several taxa (groups of microorganisms) that increased only in the intervention group were consistent with those associated with reduced AD risk in other studies. For example, Blautia, which increased in the intervention group during the intervention, has previously been associated with a lower risk of AD, potentially due to its involvement in increasing γ-aminobutyric acid (GABA) production [ 37 ]. Eubacterium also increased in the intervention group, and prior studies have identified Eubacterium genera (namely Eubacterium fissicatena) as a protective factor in AD [ 38 ].

Also, there was a decrease in the relative abundance of taxa involved in increased AD risk in the intervention group, e.g., Prevotella and Turicibacter, the latter of which has been associated with relevant biological processes such as 5-HT production. Prevotella and Turicibacter have previously been shown to increase with disease progression [ 39 ], and these taxa decreased over the course of the intervention.

These results support the hypothesis that the lifestyle intervention may beneficially modify specific microbial groups in the microbiome: increasing those that lower the risk of AD and decreasing those that increase the risk of AD. (Please see Supplement for more detailed information.)
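The microbiome methods are detailed in the Supplement; as an illustrative sketch only (assuming per-participant relative abundances of a given taxon at baseline and 20 weeks), a paired non-parametric test such as the Wilcoxon signed-rank test is one common way to ask whether a taxon changed within a group.

    import numpy as np
    from scipy import stats

    # Placeholder relative abundances of one taxon (e.g., Blautia) for the same
    # intervention-group participants at baseline and at 20 weeks. Not study data.
    baseline = np.array([0.020, 0.015, 0.030, 0.010, 0.025, 0.018, 0.022, 0.012])
    week20 = np.array([0.028, 0.020, 0.041, 0.016, 0.030, 0.025, 0.027, 0.018])

    # Paired, non-parametric test of within-group change in relative abundance.
    stat, p = stats.wilcoxon(week20, baseline)
    print(f"Wilcoxon signed-rank p-value: {p:.4f}")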

We report the first randomized, controlled trial showing that an intensive multimodal lifestyle intervention may significantly improve cognition and function, and may beneficially affect relevant biomarkers, in many patients with MCI or early dementia due to AD after 20 weeks.

After 20 weeks of a multimodal intensive lifestyle intervention, results of the primary analysis when all patients were included showed overall statistically significant differences between the intervention group and the randomized control group in cognition and function as measured by the CGIC ( p = 0.001), CDR-SB ( p = 0.032), and CDR Global ( p = 0.037) tests and of borderline significance in the ADAS-Cog test ( p = 0.053).

Three of these measures (CGIC, CDR Global, ADAS-Cog) showed improvement in cognition and function in the intervention group and worsening in the randomized control group, and one test (CDR-SB) showed less progression in the intervention group when compared to the control group which worsened in all four of these measures.

These differences were even clearer in a secondary sensitivity analysis when a mathematical outlier was excluded. These results showed statistically significant differences between groups in all four of these measures of cognition and function. Three of these measures showed improvement in cognition and function and one (CDR-SB) showed less deterioration when compared to the randomized control group, which worsened in all four of these measures.

The validity of these changes in cognition and function, and possible biological mechanisms of improvement, are supported by the observed changes in several clinically relevant biomarkers that showed statistically significant differences in a beneficial direction after 20 weeks when compared to the randomized control group.

One of the most clinically relevant biomarkers is the plasma Aβ42/40 ratio, which increased by 6.4% in the intervention group and decreased by 8.3% in the randomized control group after 20 weeks, and these differences were statistically significant ( p = 0.003, two-tailed).

In the lecanemab trial, plasma levels of the Aβ42/40 biomarker increased in the intervention group over 18 months with the presumption that this reflected amyloid moving from the brain to the plasma [ 40 ]. We found similar results in the direction of change in the plasma Aβ42/40 ratio from this lifestyle intervention but in only 20 weeks. Conversely, this biomarker decreased in the control group (as in the lecanemab trial), which may indicate increased cerebral uptake of amyloid.

Other clinically relevant biomarkers also showed statistically significant differences (two-tailed) in a beneficial direction after 20 weeks when compared to the randomized control group. These include hemoglobin A1c, insulin, glycoprotein acetyls (GlycA), LDL-cholesterol, and β-Hydroxybutyrate (ketone bodies).

Improvement in these biomarkers provides more biological plausibility for the observed improvements in cognition and function as well as more insight into the possible mechanisms of improvement. This information may also help in predicting which patients are more likely to show improvements in cognition and function by making these intensive lifestyle changes.

Other relevant biomarkers were in a beneficial direction of change in the intervention group compared with the randomized control group after 20 weeks. These include pTau181, GFAP, CRP, SAA, and C-peptide. Telomere length increased in the intervention group and was essentially unchanged in the control group. These differences were not statistically significant even when there was an order of magnitude difference between groups (as with GFAP and pTau181) or an almost four-fold difference (as with CRP), but these changes were in a beneficial direction. At least in part, these findings may be due to a relatively small sample size and/or a short duration of only 20 weeks.

We found a statistically significant dose-response correlation between the degree of lifestyle changes in both groups (“lifestyle index”) and the degree of change in many of these biomarkers. This correlation was found in both the degree of change in lifestyle from baseline to 20 weeks as well as the lifestyle measured at 20 weeks. These correlations also add to the biological plausibility of these findings.

We also found a statistically significant dose-response correlation between the degree of lifestyle changes in both groups (“lifestyle index”) and changes in most measures of cognition and function testing. In short, the more these AD patients changed their lifestyle in the prescribed ways, the greater was the beneficial impact on their cognition and function. These correlations also add to the biological plausibility of these findings. This variation in adherence helps to explain in part why some patients in the intervention group improved and others did not, but there are likely other mechanisms that we do not fully understand that may play a role. These statistically significant correlations are especially meaningful given the greater variability of self-reported data, the relatively small sample size, and the short duration of the intervention.

These findings are consistent with earlier clinical trials in which we used this same lifestyle intervention and the same measure of lifestyle index and found significant dose-response correlations between this lifestyle index (i.e., the degree of lifestyle changes) and changes in the degree of coronary atherosclerosis (percent diameter stenosis) in coronary heart disease; [ 41 , 45 ] changes in PSA levels and LNCaP cell growth in men with prostate cancer; [ 42 ] and changes in telomere length [ 43 ].

We also found significant differences between the intervention and control groups in several taxa (groups of micro-organisms) in the microbiome which may be beneficial.

There were no significant differences in depression scores as measured by PHQ-9 between the intervention and control groups. Therefore, reduction in depression is unlikely to account for the overall improvements in cognition and function seen in the intervention group patients.

We also found that substantial lifestyle changes were required to stop the progression of MCI in these patients. In the primary analysis, this ranged from 71.4% adherence for ADAS-Cog to 95.6% adherence for CDR-Global to 120.6% adherence for CDR-SB. In other words, extensive lifestyle changes were required to stop or improve cognition and function in these patients. This helps to explain why other studies of less-intensive lifestyle interventions may not have been sufficient to stop deterioration or improve cognition and function.

For example, comparing these results to those of the MIND-AD clinical trial provides more biological plausibility for both studies [ 44 ]. That is, more moderate multimodal lifestyle changes may slow the rate of worsening of cognition and function in MCI or early dementia due to early-stage AD, whereas more intensive multimodal lifestyle changes may result in overall average improvements in many measures of cognition and function when compared to a randomized usual-care control group in both clinical trials.

Lifestyle changes may provide additional benefits to patients on drug therapy. Anti-amyloid antibodies have shown modest effects on slowing progression, but they are expensive, have potential for adverse events, are not yet widely available, and do not result in overall cognitive improvement [ 40 ]. Perhaps there may be synergy from doing both.

Limitations

This study has several limitations. Only 51 patients were enrolled and randomized in our study, and two of these patients (both in the intervention group) withdrew during the trial. Showing statistically significant differences across different tests of cognition and function and other measures despite the relatively small sample size suggests that the lifestyle intervention may be especially effective and has strong internal validity.

However, the smaller sample size limits generalizability, especially since there was much less racial and ethnic diversity in this sample than we strived to achieve. Also, we measured these differences despite the relative insensitivity of these measures, which might have increased the likelihood of a type II error.

Raters were blinded to the group assignment of the participants. However, unlike a double-blind placebo-controlled drug trial, it is not possible to blind subjects in a lifestyle intervention about whether or not they are receiving the intervention. This might have affected outcome measures, although to reduce positive expectations and because it was true, patients were told during the study that we did not know whether or not this lifestyle intervention would be beneficial, and we said that whatever we showed would be useful.

Also, 20 weeks is a relatively short time for any intervention with MCI or early dementia due to AD. We did not include direct measures of brain structure in this trial, so we cannot determine whether there were direct impacts on markers of brain pathology relevant to AD. However, surrogate markers such as the plasma Aβ42/40 ratio are becoming more widely accepted.

Not all patients in the intervention group improved. Of the 24 patients in the intervention group, 10 showed improvement as measured by the CGIC test, 7 were unchanged, and 7 worsened. In the control group, none improved, 8 were unchanged, and 17 worsened. In part, this may be explained by variations in adherence to the lifestyle intervention, as there was a significant relationship between the degree of lifestyle change and the degree of change in cognition and function across both groups. We hope that further research may further clarify other factors and mechanisms to help explain why cognition and function improved in some patients but not in others.

The findings on the degree of lifestyle change required to stop the worsening or improve cognition and function need to be interpreted with caution. Since data from both groups were combined, it was no longer a randomized trial for this specific analysis, so there could be unknown confounding influences. Also, it is possible that those with improved changes in cognition were better able to adhere to the intervention and thus have higher lifestyle indices.

In summary, in persons with mild cognitive impairment or early dementia due to Alzheimer’s disease, comprehensive lifestyle changes may improve cognition and function in several standard measures after 20 weeks. In contrast, patients in the randomized control group showed overall worsening in all four measures of cognition and function during this time.

The validity of these findings was supported by the observed changes in plasma biomarkers and microbiome; the dose-response correlation of the degree of lifestyle change with the degree of improvement in all four measures of cognition and function; and the correlation between the degree of lifestyle change and the degree of changes in the Aβ42/40 ratio and the changes in some other relevant biomarkers in a beneficial direction.

Our findings also have implications for helping to prevent AD. Newer technologies, some aided by artificial intelligence, enable the probable diagnosis of AD years before it becomes clinically apparent. However, many people do not want to know if they are likely to get AD if they do not believe they can do anything about it. If intensive lifestyle changes may cause improvement in cognition and function in MCI or early dementia due to AD, then it is reasonable to think that these lifestyle changes may also help to prevent MCI or early dementia due to AD. Also, it may take less-extensive lifestyle changes to help prevent AD than to treat it. Other studies cited earlier on the effects of these lifestyle changes on diseases such as coronary heart disease support this conclusion. Clearly, intensive lifestyle changes rather than moderate ones seem to be required to improve cognition and function in those suffering from early-stage AD.

These findings support longer follow-up and larger clinical trials to determine the longer-term outcomes of this intensive lifestyle medicine intervention in larger groups of more diverse AD populations; why some patients beneficially respond to a lifestyle intervention better than others besides differences in adherence; as well as the potential synergy of these lifestyle changes and some drug therapies.

Availability of data and materials

The datasets used and/or analyzed during the current study may be available from the corresponding author on reasonable request. Requesters will be asked to submit a study protocol, including the research question, planned analysis, and data required. The authors will evaluate this plan (i.e., relevance of the research question, suitability of the data, quality of the proposed analysis, planned or ongoing analysis, and other matters) on a case-by-case basis.

Livingston G, Huntley J, Sommerlad A, Ames D, Ballard C, Banerjee S, Brayne C, Burns A, Cohen-Mansfield J, Cooper C, Costafreda SG, Dias A, Fox N, Gitlin LN, Howard R, Kales HC, Kivimäki M, Larson EB, Ogunniyi A, Orgeta V, Ritchie K, Rockwood K, Sampson EL, Samus Q, Schneider LS, Selbæk G, Teri L, Mukadam N. Dementia prevention, intervention, and care: 2020 report of the Lancet Commission. Lancet. 2020;396(10248):413–46. https://doi.org/10.1016/S0140-6736(20)30367-6 . (Epub 2020 Jul 30. Erratum in: Lancet. 2023 Sep 30;402(10408):1132. PMID: 327389 PMCID: PMC7392084).


Ornish D, Ornish A. UnDo It. New York: Ballantine Books; 2019.


Dhana K, Agarwal P, James BD, Leurgans SE, Rajan KB, Aggarwal NT, Barnes LL, Bennett DA, Schneider JA. Healthy Lifestyle and Cognition in Older Adults With Common Neuropathologies of Dementia. JAMA Neurol. 2024. https://doi.org/10.1001/jamaneurol.2023.5491 . Epub ahead of print. PMID: 38315471.

Morris MC, Evans DA, Tangney CC, Bienias JL, Wilson RS. Associations of vegetable and fruit consumption with age-related cognitive change. Neurology. 2006;67(8):1370–6. https://doi.org/10.1212/01.wnl.0000240224.38978.d8 . (PMID:17060562;PMCID:PMC3393520).


Morris MC, Evans DA, Bienias JL, Tangney CC, Bennett DA, Aggarwal N, Schneider J, Wilson RS. Dietary fats and the risk of incident Alzheimer disease. Arch Neurol. 2003;60(2):194–200. https://doi.org/10.1001/archneur.60.2.194 . (Erratum in: Arch Neurol. 2003 Aug;60(8):1072. PMID: 12580703).


Yu JT, Xu W, Tan CC, Andrieu S, Suckling J, Evangelou E, Pan A, Zhang C, Jia J, Feng L, Kua EH, Wang YJ, Wang HF, Tan MS, Li JQ, Hou XH, Wan Y, Tan L, Mok V, Tan L, Dong Q, Touchon J, Gauthier S, Aisen PS, Vellas B. Evidence-based prevention of Alzheimer’s disease: systematic review and meta-analysis of 243 observational prospective studies and 153 randomised controlled trials. J Neurol Neurosurg Psychiatry. 2020;91(11):1201–9. https://doi.org/10.1136/jnnp-2019-321913 . (Epub 2020 Jul 20. PMID: 32690803; PMCID: PMC7569385).

Blumenthal JA, Smith PJ, Mabe S, Hinderliter A, Lin PH, Liao L, et al. Lifestyle and neurocognition in older adults with cognitive impairments: A randomized trial. Neurology. 2019;92(3):e212–23. https://doi.org/10.1212/WNL.0000000000006784 . (Epub 2018/12/21. PubMed PMID: 30568005; PubMed Central PMCID: PMCPMC6340382).

Ngandu T, Lehtisalo J, Solomon A, Levälahti E, Ahtiluoto S, Antikainen R, Bäckman L, Hänninen T, Jula A, Laatikainen T, Lindström J, Mangialasche F, Paajanen T, Pajala S, Peltonen M, Rauramaa R, Stigsdotter-Neely A, Strandberg T, Tuomilehto J, Soininen H, Kivipelto M. A 2 year multidomain intervention of diet, exercise, cognitive training, and vascular risk monitoring versus control to prevent cognitive decline in at-risk elderly people (FINGER): a randomised controlled trial. Lancet. 2015;385(9984):2255–63. https://doi.org/10.1016/S0140-6736(15)60461-5 . (Epub 2015 Mar 12 PMID: 25771249).

Rosenberg A, Ngandu T, Rusanen M, Antikainen R, Backman L, Havulinna S, et al. Multidomain lifestyle intervention benefits a large elderly population at risk for cognitive decline and dementia regardless of baseline characteristics: The FINGER trial. Alzheimers Dement. 2018;14(3):263–70. https://doi.org/10.1016/j.jalz.2017.09.006 . (Epub 2017/10/23. PubMed PMID: 29055814).

Solomon A, Turunen H, Ngandu T, Peltonen M, Levalahti E, Helisalmi S, et al. Effect of the apolipoprotein e genotype on cognitive change during a multidomain lifestyle intervention: a subgroup analysis of a randomized clinical trial. JAMA Neurol. 2018;75(4):462–70. https://doi.org/10.1001/jamaneurol.2017.4365 . (Epub 2018/01/23. PubMed PMID: 29356827; PubMed Central PMCID: PMCPMC5885273).

Lehtisalo J, Rusanen M, Solomon A, Antikainen R, Laatikainen T, Peltonen M, et al. Effect of a multi-domain lifestyle intervention on cardiovascular risk in older people: the FINGER trial. Eur Heart J. 2022. https://doi.org/10.1093/eurheartj/ehab922 . Epub 2022/01/21. PubMed PMID: 35051281.

Kivipelto M, Mangialasche F, Snyder HM, Allegri R, Andrieu S, Arai H, et al. World-Wide FINGERS Network: a global approach to risk reduction and prevention of dementia. Alzheimers Dement. 2020;16(7):1078–94. https://doi.org/10.1002/alz.12123 . (Epub 2020/07/07. PubMed PMID: 32627328).

Kivipelto M, Mangialasche F, Snyder H M, Allegri R, Andrieu S, Arai H, Baker L, Belleville S, Brodaty H, Brucki SM, Calandri I, Caramelli P, Chen C, Chertkow H, Chew E, Choi S H, Chowdhary N, Crivelli L, De La Torre R, Du Y, Dua T, Espeland M, Feldman H H, Hartmanis M, Hartmann T, Heffernan M, Henry C J, Hong C H, Håkansson K, Iwatsubo T, Jeong J H, Jimenez‐Maggiora G, Koo E H, Launer L J, Lehtisalo J, Lopera F, Martínez‐Lage P, Martins R, Middleton L, Molinuevo J L, Montero‐Odasso M, Moon S Y, Morales‐Pérez K, Nitrini R, Nygaard H B, Park Y K, Peltonen M, Qiu C, Quiroz Y T, Raman R, Rao N, Ravindranath V, Rosenberg A, Sakurai T, Salinas R M, Scheltens P, Sevlever G, Soininen H, Sosa A L, Suemoto C K, Tainta‐Cuezva M, Velilla L, Wang Y, Whitmer R, Xu X, Bain L J, Solomon A, Ngandu T, Carillo, M C. World‐Wide FINGERS Network: A global approach to risk reduction and prevention of dementia. Alzheimer's Dement. 2020, https://doi.org/10.1002/alz.12123 .

Yaffe K, Vittinghoff E, Dublin S, Peltz CB, Fleckenstein LE, Rosenberg DE, Barnes DE, Balderson BH, Larson EB. Effect of personalized risk-reduction strategies on cognition and dementia risk profile among older adults: the SMARRT randomized clinical trial. JAMA Intern Med. 2023:e236279. https://doi.org/10.1001/jamainternmed.2023.6279 . Epub ahead of print. PMID: 38010725; PMCID: PMC10682943

Ornish D, Scherwitz LW, Billings JH, Brown SE, Gould KL, Merritt TA, Sparler S, Armstrong WT, Ports TA, Kirkeeide RL, Hogeboom C, Brand RJ. Intensive lifestyle changes for reversal of coronary heart disease. JAMA. 1998;280(23):2001–7. https://doi.org/10.1001/jama.280.23.2001 . (Erratum in: JAMA. 1999 Apr 21;281(15):1380. PMID: 9863851).

Ornish D, Scherwitz LW, Doody RS, Kesten D, McLanahan SM, Brown SE, DePuey E, Sonnemaker R, Haynes C, Lester J, McAllister GK, Hall RJ, Burdine JA, Gotto AM Jr. Effects of stress management training and dietary changes in treating ischemic heart disease. JAMA. 1983;249(1):54–9 (PMID: 6336794).

Gould KL, Ornish D, Scherwitz L, Brown S, Edens RP, Hess MJ, Mullani N, Bolomey L, Dobbs F, Armstrong WT, et al. Changes in myocardial perfusion abnormalities by positron emission tomography after long-term, intense risk factor modification. JAMA. 1995;274(11):894–901. https://doi.org/10.1001/jama.1995.03530110056036 . (PMID: 7674504).

Dhana K, Evans DA, Rajan KB, Bennett DA, Morris MC. Healthy lifestyle and the risk of Alzheimer dementia: Findings from 2 longitudinal studies. Neurology. 2020;95(4):e374–83. https://doi.org/10.1212/WNL.0000000000009816 . (Epub 2020 Jun 17. PMID: 32554763; PMCID: PMC7455318).


McKhann GM, Knopman DS, Chertkow H, Hyman BT, Jack CR Jr, Kawas CH, Klunk WE, Koroshetz WJ, Manly JJ, Mayeux R, Mohs RC, Morris JC, Rossor MN, Scheltens P, Carrillo MC, Thies B, Weintraub S, Phelps CH. The diagnosis of dementia due to Alzheimer’s disease: recommendations from the National Institute on Aging-Alzheimer’s Association workgroups on diagnostic guidelines for Alzheimer’s disease. Alzheimers Dement. 2011;7(3):263–9. https://doi.org/10.1016/j.jalz.2011.03.005 . (Epub 2011 Apr 21. PMID: 21514250; PMCID: PMC3312024).

Albert MS, DeKosky ST, Dickson D, Dubois B, Feldman HH, Fox NC, et al. The diagnosis of mild cognitive impairment due to Alzheimer’s disease: recommendations from the National Institute on Aging-Alzheimer’s Association workgroups on diagnostic guidelines for Alzheimer’s disease. Alzheimers Dement. 2011;7:270–9.

McDonald K, Seltzer E, Lu M, Gaisenband SD, Fletcher C, McLeroth P, Saini KS. Quantifying the impact of the COVID-19 pandemic on clinical trial screening rates over time in 37 countries. Trials. 2023;24(1):254. https://doi.org/10.1186/s13063-023-07277-1 . (PMID:37013558;PMCID:PMC10071259).

Tang HY, Vitiello MV, Perlis M, Mao JJ, Riegel B. A pilot study of audio-visual stimulation as a self-care treatment for insomnia in adults with insomnia and chronic pain. Appl Psychophysiol Biofeedback. 2014;39(3–4):219–25. https://doi.org/10.1007/s10484-014-9263-8 . (PMID:25257144;PMCID:PMC4221414).

Horsley K. Unlimited Memory. Granger Indiana: TCK Publishing; 2016.

Morris MC, Evans DA, Bienias JL, Tangney CC, Bennett DA, Wilson RS, et al. Consumption of fish and n-3 fatty acids and risk of incident Alzheimer disease. Arch Neurol. 2003;60(7):940–6. https://doi.org/10.1001/archneur.60.7.940 . (Epub 2003/07/23. PubMed PMID: 12873849).

Voulgaropoulou SD, van Amelsvoort T, Prickaerts J, Vingerhoets C. The effect of curcumin on cognition in Alzheimer’s disease and healthy aging: A systematic review of pre-clinical and clinical studies. Brain Res. 2019;1725:146476. https://doi.org/10.1016/j.brainres.2019.146476 . (Epub 2019/09/29. PubMed PMID: 31560864).

Ringman JM, Frautschy SA, Teng E, Begum AN, Bardens J, Beigi M, Gylys KH, Badmaev V, Heath DD, Apostolova LG, Porter V, Vanek Z, Marshall GA, Hellemann G, Sugar C, Masterman DL, Montine TJ, Cummings JL, Cole GM. Oral curcumin for Alzheimer’s disease: tolerability and efficacy in a 24-week randomized, double blind, placebo-controlled study. Alzheimers Res Ther. 2012;4(5):43. https://doi.org/10.1186/alzrt146 . (PMID:23107780;PMCID:PMC3580400).

Shea TB, Remington R. Nutritional supplementation for Alzheimer’s disease? Curr Opin Psychiatry. 2015;28(2):141–7. https://doi.org/10.1097/YCO.0000000000000138 . (Epub 2015/01/21. PubMed PMID: 25602242).

Pradhan N, Singh C, Singh A. Coenzyme Q10 a mitochondrial restorer for various brain disorders. Naunyn Schmiedebergs Arch Pharmacol. 2021;394(11):2197–222. https://doi.org/10.1007/s00210-021-02161-8 . (Epub 2021/10/02 PubMed PMID: 34596729).

Harrison FE. A critical review of vitamin C for the prevention of age-related cognitive decline and Alzheimer’s disease. J Alzheimers Dis. 2012;29(4):711–26. https://doi.org/10.3233/JAD-2012-111853 . (Epub 2012/03/01. PubMed PMID: 22366772; PubMed Central PMCID: PMCPMC3727637).

Lauer AA, Grimm HS, Apel B, Golobrodska N, Kruse L, Ratanski E, et al. Mechanistic Link between Vitamin B12 and Alzheimer's Disease. Biomolecules. 2022;12(1). https://doi.org/10.3390/biom12010129 . Epub 2022/01/22. PubMed PMID: 35053277; PubMed Central PMCID: PMCPMC8774227.

Du K, Zheng X, Ma ZT, Lv JY, Jiang WJ, Liu MY. Association of Circulating Magnesium Levels in Patients With Alzheimer’s Disease From 1991 to 2021: A Systematic Review and Meta-Analysis. Front Aging Neurosci. 2021;13:799824. https://doi.org/10.3389/fnagi.2021.799824 . (Epub 2022/01/28. PubMed PMID: 35082658; PubMed Central PMCID: PMCPMC8784804).

Saitsu Y, Nishide A, Kikushima K, Shimizu K, Ohnuki K. Improvement of cognitive functions by oral intake of Hericium erinaceus. Biomed Res. 2019;40(4):125–31. https://doi.org/10.2220/biomedres.40.125 . (Epub 2019/08/16. PubMed PMID: 31413233).

Mori K, Inatomi S, Ouchi K, Azumi Y, Tuchida T. Improving effects of the mushroom Yamabushitake (Hericium erinaceus) on mild cognitive impairment: a double-blind placebo-controlled clinical trial. Phytother Res. 2009;23(3):367–72. https://doi.org/10.1002/ptr.2634 . (Epub 2008/10/11. PubMed PMID: 18844328).

Xiang S, Ji JL, Li S, Cao XP, Xu W, Tan L, et al. Efficacy and Safety of probiotics for the treatment of alzheimer’s disease, mild cognitive impairment, and Parkinson’s Disease: a systematic review and meta-analysis. Front Aging Neurosci. 2022;14:730036. https://doi.org/10.3389/fnagi.2022.730036 . (Epub 2022/02/22. PubMed PMID: 35185522; PubMed Central PMCID: PMCPMC8851038).

Fogelman I, West T, Braunstein JB, Verghese PB, Kirmess KM, Meyer MR, Contois JH, Shobin E, Ferber KL, Gagnon J, Rubel CE, Graham D, Bateman RJ, Holtzman DM, Huang S, Yu J, Yang S, Yarasheski KE. Independent study demonstrates amyloid probability score accurately indicates amyloid pathology. Ann Clin Transl Neurol. 2023;10(5):765–78. https://doi.org/10.1002/acn3.51763 . (Epub 2023 Mar 28. PMID: 36975407; PMCID: PMC10187729).

Tukey JW. Exploratory Data Analysis. Reading, MA: Addison-Wesley; 1977. https://doi.org/10.1002/bimj.4710230408 .

Zhuang Z, Yang R, Wang W, Qi L, Huang T. Associations between gut microbiota and Alzheimer’s disease, major depressive disorder, and schizophrenia. J Neuroinflammation. 2020;17(1):288. https://doi.org/10.1186/s12974-020-01961-8 . (PMID:33008395;PMCID:PMC7532639).

Cammann D, Lu Y, Cummings MJ, Zhang ML, Cue JM, Do J, Ebersole J, Chen X, Oh EC, Cummings JL, Chen J. Genetic correlations between Alzheimer’s disease and gut microbiome genera. Sci Rep. 2023;13(1):5258. https://doi.org/10.1038/s41598-023-31730-5 . (PMID:37002253;PMCID:PMC10066300).

Borsom EM, Conn K, Keefe CR, Herman C, Orsini GM, Hirsch AH, Palma Avila M, Testo G, Jaramillo SA, Bolyen E, Lee K, Caporaso JG, Cope EK. Predicting Neurodegenerative Disease Using Prepathology Gut Microbiota Composition: a Longitudinal Study in Mice Modeling Alzheimer’s Disease Pathologies. Microbiol Spectr. 2023;11(2):e0345822. https://doi.org/10.1128/spectrum.03458-22 . (Epub ahead of print. PMID: 36877047; PMCID: PMC10101110).

van Dyck CH, Swanson CJ, Aisen P, Bateman RJ, Chen C, Gee M, Kanekiyo M, Li D, Reyderman L, Cohen S, Froelich L, Katayama S, Sabbagh M, Vellas B, Watson D, Dhadda S, Irizarry M, Kramer LD, Iwatsubo T. Lecanemab in Early Alzheimer’s Disease. N Engl J Med. 2023;388(1):9–21. https://doi.org/10.1056/NEJMoa2212948 . (Epub 2022 Nov 29 PMID: 36449413).

Ornish D, Scherwitz LW, Billings JH, Brown SE, Gould KL, Merritt TA, Sparler S, Armstrong WT, Ports TA, Kirkeeide RL, Hogeboom C, Brand RJ. Intensive lifestyle changes for reversal of coronary heart disease. JAMA. 1998;280(23):2001–7. https://doi.org/10.1001/jama.280.23.2001 .

Ornish D, Weidner G, Fair WR, Marlin R, Pettengill EB, Raisin CJ, Dunn-Emke S, Crutchfield L, Jacobs FN, Barnard RJ, Aronson WJ, McCormac P, McKnight DJ, Fein JD, Dnistrian AM, Weinstein J, Ngo TH, Mendell NR, Carroll PR. Intensive lifestyle changes may affect the progression of prostate cancer. J Urol. 2005;174(3):1065–9. https://doi.org/10.1097/01.ju.0000169487.49018.73 . (discussion 1069-70. PMID: 16094059).

Ornish D, Lin J, Chan JM, Epel E, Kemp C, Weidner G, Marlin R, Frenda SJ, Magbanua MJM, Daubenmier J, Estay I, Hills NK, Chainani-Wu N, Carroll PR, Blackburn EH. Effect of comprehensive lifestyle changes on telomerase activity and telomere length in men with biopsy-proven low-risk prostate cancer: 5-year follow-up of a descriptive pilot study. Lancet Oncol. 2013;14(11):1112–20. https://doi.org/10.1016/S1470-2045(13)70366-8 . (Epub 2013 Sep 17 PMID: 24051140).

Kivipelto M et al. Multimodal preventive trial for Alzheimer’s disease. Alzheimer’s Dement. 2021;17(Suppl.10):e056105. https://alz-journals.onlinelibrary.wiley.com/doi/abs/10.1002/alz.056105 .

Ornish D, Brown SE, Scherwitz LW, Billings JH, Armstrong WT, Ports TA, McLanahan SM, Kirkeeide RL, Brand RJ, Gould KL. Can lifestyle changes reverse coronary heart disease? The Lifestyle Heart Trial. Lancet. 1990;336(8708):129–33. https://doi.org/10.1016/0140-6736(90)91656-u . (PMID: 1973470).


Acknowledgements

We are grateful to each of the following people who made this study possible. Paramount among these are all of the study participants and their spouse or support person. Their commitment was inspiring, and without them this study would not have been possible. Each of the staff who provided and supported this program is exceptionally caring and competent, and includes: Heather Amador, who coordinated and administered all grants and infrastructure; Tandis Alizadeh, who is chief of staff; as well as Lynn Sievers, Nikki Liversedge, Pamela Kimmel, Stacie Dooreck, Antonella Dewell, Stacey Dunn-Emke, Marie Goodell, Emily Dougherty, Kamala Berrio, Kristin Gottesman, Katie Mayers, Dennis Malone, Sarah & Mary Barber, Steven Singleton, Kevin Lane, Laurie Case, Amber O’Neill, Annie DiRocco, Alison Eastwood, Sara Henley, Sousha Naghshineh, Sarah Reinhard, Laura Kandell, Alison Haag, Sinead Lafferty, Haley Perkins, Chase Delaney, Danielle Marquez, Ava Hoffman, Sienna Lopez, and Sophia Gnuse. Dr. Caitlin Moore conducted much of the cognition and function testing along with Dr. Catherine Madison, Trevor Ragas, Andrea Espinosa, Lorraine Martinez, Davor Zink, Jeff Webb, Griffin Duffy, Lauren Sather, and others. Dr. Cecily Jenkins trained the ADAS-Cog rater. Dr. Jan Krumsiek and Dr. Richa Batra performed important analyses in Dr. Rima Kaddurah-Daouk’s lab. Dr. Pia Kivisåkk oversaw biomarker assays in Dr. Steven Arnold's lab. We are grateful to all of the referring neurologists. Board members of the nonprofit Preventive Medicine Research Institute provided invaluable oversight and support, including Henry Groppe, Jenard & Gail Gross, Ken Hubbard, Brock Leach, and Lee Stein, as well as Joel Goldman.

Authors’ information

DO is the corresponding author. RT contributed as the senior author.

Funding

We are very grateful to Leonard A. Lauder & Judith Glickman Lauder; Gary & Laura Lauder; Howard Fillit and Mark Roithmayr of The Alzheimer’s Drug Discovery Foundation; Mary & Patrick Scanlan of the Mary Bucksbaum Scanlan Family Foundation; Laurene Powell Jobs/Silicon Valley Community Foundation; Pierre & Pamela Omidyar Fund/Silicon Valley Community Foundation (Pat Christen and Jeff Alvord); George Vradenburg Foundation/Us Against Alzheimer’s; American Endowment Foundation (Anna & James McKelvey); Arthur M. Blank Family Foundation/Around the Table Foundation (Elizabeth Brown, Natalie Gilbert, Christian Amica); John Paul & Eloise DeJoria Peace Love & Happiness Foundation (Constance Dykhuizen); Maria Shriver/Women’s Alzheimer’s Movement (Sandy Gleysteen, Laurel Ann Gonsecki, Erin Stein); Mark Pincus Family Fund/Silicon Valley Community Foundation; Christy Walton/Walton Family Foundation; Milken Family Foundation; The Cleveland Clinic Lou Ruvo Center for Brain Health (Larry Ruvo); Jim Greenbaum Foundation; R. Martin Chavez; Wonderful Company Foundation (Stewart & Lynda Resnick); Daniel Socolow; Anthony J. Robbins/Tony Robbins Foundation; John Mackey; John & Lisa Pritzker and the Lisa Stone Pritzker Family Foundation; Ken Hubbard; Greater Houston Community Foundation (Jenard & Gail Gross); Henry Groppe; Brock & Julie Leach Family Charitable Foundation; Bucksbaum/Baum Foundation (Glenn Bucksbaum & April Minnich); YPO Gold Los Angeles; Lisa Holland/Betty Robertson; the Each Foundation (Lionel Shaw); Moby Charitable Fund; California Relief Program; Gary & Lisa Schildhorn; McNabb Foundation (Ricky Rafner); Renaissance Charitable Foundation (Stephen & Karen Slinkard); Network for Good; Ken & Kim Raisler Foundation; Miner Foundation; Craigslist Charitable Fund (Jim Buckmaster and Annika Joy Quist); Gaurav Kapadia; Healing Works Foundation/Wayne Jonas; and the Center for Innovative Medicine (CIMED) at the Karolinska Institutet, Hjärnfonden, Stockholms Sjukhem, Research Council for Health Working Life and Welfare (FORTE). In-kind donations were received from Alan & Rob Gore of Body Craft Recreation Supply (exercise equipment), Dr. Andrew Abraham of Orgain, Paul Stamets of Fungi Perfecti (Host Defense Lion’s Mane), Nordic Naturals, and Flora. Dr. Rima Kaddurah-Daouk at Duke is PI of the Alzheimer Gut Microbiome Project (funded by NIA U19AG063744). She also received additional funding from NIA that has enabled her research (U01AG061359 & R01AG081322).

The funders had no role in the conceptualization, study design, data collection, analysis and interpretation, writing of the report, or the decision to submit for publication.

Author information

Authors and Affiliations

Preventive Medicine Research Institute, 900 Bridgeway, Sausalito, CA, USA

Dean Ornish, Catherine Madison, Anne Ornish, Nancy DeLamarter, Noel Wingers & Carra Richling

University of California, San Francisco and University of California, San Diego, USA

Dean Ornish

Ray Dolby Brain Health Center, California Pacific Medical Center, San Francisco, CA, USA

Catherine Madison

Division of Clinical Geriatrics, Department of Neurobiology, Care Sciences and Society, Karolinska Institute, Karolinska vägen 37 A, SE-171 64, Solna, Sweden

Miia Kivipelto

Theme Inflammation and Aging, Karolinska University Hospital, Karolinska vägen 37 A, SE-171 64, Stockholm, Solna, Sweden

The Ageing Epidemiology (AGE) Research Unit, School of Public Health, Imperial College London, St Mary’s Hospital, Norfolk Place, London, W2 1PG, United Kingdom

Institute of Public Health and Clinical Nutrition, University of Eastern Finland, Yliopistonranta 8, 70210, Kuopio, Finland

Clinical Services, Preventive Medicine Research Institute, 900 Bridgeway, Sausalito, CA, USA

Colleen Kemp & Sarah Tranter

Division of Biostatistics, Department of Epidemiology & Biostatistics, UCSF, San Francisco, CA, USA

Charles E. McCulloch

Neurosciences, University of California, San Diego, CA, USA

Douglas Galasko

Clinical Neurology, School of Medicine, University of Nevada, Reno, USA

Renown Health Institute of Neurosciences, Reno, NV, USA

Harvard Medical School, Boston, MA, USA

Dorene Rentz, Rudolph E. Tanzi & Steven E. Arnold

Center for Alzheimer Research and Treatment, Boston, MA, USA

Dorene Rentz

Mass General Brigham Alzheimer Disease Research Center, Boston, MA, USA

Elizabeth Blackburn Lab, UCSF, San Francisco, CA, USA

UCSF, San Francisco, CA, USA

Departments of Medicine and Psychiatry, Duke University Medical Center and Member, Duke Institute of Brain Sciences, Durham, NC, USA

Rima Kaddurah-Daouk

Department of Pediatrics; Department of Computer Science & Engineering; Department of Bioengineering; Center for Microbiome Innovation, Halıcıoğlu Data Science Institute, University of California, San Diego, La Jolla, CA, USA

Department of Pediatrics and Scientific Director, American Gut Project and The Microsetta Initiative, University of California San Diego, La Jolla, CA, USA

Daniel McDonald

Bioinformatics and Systems Biology Program; Rob Knight Lab; Medical Scientist Training Program, University of California, San Diego, La Jolla, CA, USA

Lucas Patel

Buck Institute for Research on Aging, San Francisco, CA, USA

Eric Verdin

University of California, San Francisco, CA, USA

Genetics and Aging Research Unit, Boston, MA, USA

Rudolph E. Tanzi

McCance Center for Brain Health, Boston, MA, USA

Massachusetts General Hospital, Boston, MA, USA

Interdisciplinary Brain Center, Massachusetts General Hospital, Boston, MA, USA

Steven E. Arnold


Contributions

DO, CM, MK, CK, DG, JA, DR, CEM, JL, KN, AO, ST, ND, NW, CR, RKD, RK, EV, RT, and SEA were involved in the study design and conduct. DO conceptualized the study hypotheses (building on the work of MK), obtained funding, prepared the first draft of the manuscript, and is the principal investigator. CEM oversaw the statistical analyses and interpretation, and DR oversaw the cognition and function testing and interpretation. CK and ST oversaw all clinical operations and patient recruitment, including the IRB. JL conducted the telomere analyses. CM oversaw patient selection. AO developed the learning management system and community platform for patients and providers. KN managed an IRB. ND co-led most of the support groups, and CR oversaw all aspects involving nutrition. All authors participated in writing the manuscript. NW and ST oversaw data collection and prepared the databases other than the microbiome databases which were overseen by RK and prepared by DM and LP who helped design this part of the study. CM, CK, JL, RKD, RK, DM, and LP were involved in the acquisition of data. SA, RT, and RKD did biomarker analyses. All authors contributed to critical review of the manuscript and approved the final manuscript.

Corresponding author

Correspondence to Dean Ornish.

Ethics declarations

Competing interests

MK is one of the Editors-in-Chief of this journal and has no relevant competing interests and recused herself from the review process. RKD is an inventor on key patents in the field of metabolomics and holds equity in Metabolon, a biotech company in North Carolina. In addition, she holds patents licensed to Chymia LLC and PsyProtix with royalties and ownership. DO and AO have consulted for Sharecare and have received book royalties and lecture honoraria and, with CK, have received equity in Ornish Lifestyle Medicine. RK is a scientific advisory board member and consultant for BiomeSense, Inc., has equity and receives income. He is a scientific advisory board member and has equity in GenCirq. He is a consultant and scientific advisory board member for DayTwo, and receives income. He has equity in and acts as a consultant for Cybele. He is a co-founder of Biota, Inc., and has equity. He is a cofounder of Micronoma, and has equity and is a scientific advisory board member. The terms of these arrangements have been reviewed and approved by the University of California, San Diego in accordance with its conflict of interest policies. DM is a consultant for BiomeSense. RT is a co-founder and equity holder in Hyperion Rx, which produces the flashing-light glasses at a theta frequency of 7.83 Hz used as an optional aid to meditation. The rest of the authors declare that they have no competing interests.

Ethics approval and consent to participate

This clinical trial was approved by the Western Institutional Review Board on 12/31/2017 (approval number: 20172897), and the trial protocol was also approved by the appropriate Institutional Review Board of each participating site. All participants and their study partners provided written informed consent.

Consent for publication

Informed consent was obtained from all patients. All data from research participants described in this paper are de-identified.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Material 1.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Ornish, D., Madison, C., Kivipelto, M. et al. Effects of intensive lifestyle changes on the progression of mild cognitive impairment or early dementia due to Alzheimer’s disease: a randomized, controlled clinical trial. Alz Res Therapy 16, 122 (2024). https://doi.org/10.1186/s13195-024-01482-z


Received: 21 February 2024

Accepted: 15 May 2024

Published: 07 June 2024

DOI: https://doi.org/10.1186/s13195-024-01482-z


Keywords:
  • Alzheimer’s
  • Lifestyle medicine
  • Social support

Alzheimer's Research & Therapy

ISSN: 1758-9193



COMMENTS

  1. Clinical Research What is It

    Clinical research is the comprehensive study of the safety and effectiveness of the most promising advances in patient care. Clinical research is different than laboratory research. It involves people who volunteer to help us better understand medicine and health. Lab research generally does not involve people — although it helps us learn ...

  2. Methodology for clinical research

Similar in essence, clinical research methods differ somewhat, depending on the type of study. Type is an integral element of study design and depends on the research question to answer. It should be specified before the start of any study. Selecting an inappropriate study type results in flawed methodology, and if it occurs after commencement ...

  3. Planning and Conducting Clinical Research: The Whole Process

    The goal of this review was to present the essential steps in the entire process of clinical research. Research should begin with an educated idea arising from a clinical practice issue. ... Springer's Journal Author Academy, and SAGE's Research methods [34-37]. Standardized research reporting guidelines often come in the form of checklists ...

  4. Clinical research study designs: The essentials

    In clinical research, our aim is to design a study which would be able to derive a valid and meaningful scientific conclusion using appropriate statistical methods. The conclusions derived from a research study can either improve health care or result in inadvertent harm to patients. Hence, this requires a well‐designed clinical research ...

  5. What Are the Different Types of Clinical Research?

    Below are descriptions of some different kinds of clinical research. Treatment Research generally involves an intervention such as medication, psychotherapy, new devices, or new approaches to ...

  6. About Clinical Studies

    What is clinical research? Clinical research is a process to find new and better ways to understand, detect, control and treat health conditions. The scientific method is used to find answers to difficult health-related questions. Ways to participate. There are many ways to participate in clinical research at Mayo Clinic.

  7. Clinical research

    The term "clinical research" refers to the entire process of studying and writing about a drug, a medical device or a form of treatment, which includes conducting interventional studies (clinical trials) or observational studies on human participants. [1] [3] Clinical research can cover any medical method or product from its inception in the ...

  8. Principles of Research Methodology: A Guide for Clinical Investigators

    Principles of Research Methodology: A Guide for Clinical Investigators is the definitive, comprehensive guide to understanding and performing clinical research. Designed for medical students, physicians, basic scientists involved in translational research, and other health professionals, this indispensable reference also addresses the unique challenges and demands of clinical research and ...

  9. A tutorial on methodological studies: the what, when, how and why

    Methodological studies - studies that evaluate the design, analysis or reporting of other research-related reports - play an important role in health research. They help to highlight issues in the conduct of research with the aim of improving health research methodology, and ultimately reducing research waste. We provide an overview of some of the key aspects of methodological studies such ...

  10. Clinical Research Methodology 1: Study Designs and ...

    Abstract. Clinical research can be categorized by the timing of data collection: retrospective or prospective. Clinical research also can be categorized by study design. In case-control studies, investigators compare previous exposures (including genetic and other personal factors, environmental influences, and medical treatments) among groups ...

  11. JAMA Guide to Statistics and Methods

    JAMA. Review. December 12, 2022. This Guide to Statistics and Methods describes the use of target trial emulation to design an observational study so it preserves the advantages of a randomized clinical trial, points out the limitations of the method, and provides an example of its use. Research, Methods, Statistics.

  12. Handbook for Good Clinical Research Practice (Gcp)

    Clinical trials - methods. 2. Biomedical research - methods. 3. Ethics, Research. 4. Manuals. I. World Health Organization. ISBN 92 4 159392 X (NLM classification: W 20.5) Contents Preamble 1 Introduction 3 ... Clinical research is necessary to establish the safety and effectiveness of specific health and medical products and practices ...

  13. Clinical Research Methodology 1: Study Designs and Methodolo ...

    Clinical research can be categorized by the timing of data collection: retrospective or prospective. Clinical research also can be categorized by study design. In cross-sectional studies, exposure and outcome are evaluated simultaneously. In case-control studies, investigators compare previous exposures (including genetic and other personal ... (A brief worked odds-ratio example for this kind of case-control comparison appears after this list.)

  14. What are the different types of clinical research?

    Transcript. ANNOUNCER: There are many different types of clinical research because researchers study many different things. Treatment research usually tests an intervention such as medication, psychotherapy, new devices, or new approaches. Prevention research looks for better ways to prevent disorders from developing or returning.

  15. Clinical Research Methods

    The Clinical Research Methods (CRM) track in Biostatistics responds to a pressing need for advanced training in clinical research design and analysis. As medical school curricula become increasingly full and apprenticeship prospects wane, pathways to becoming a clinical researcher have narrowed. This program offers talented-but-novice ...

  16. Home

    The Research Methods Resources website provides investigators with important research methods resources to help them design their studies using the best available methods. The material is relevant to both randomized and non-randomized trials, human and animal studies, and basic and applied research. ... Experiments, including clinical trials ...

  17. Clinical Trials and Clinical Research: A Comprehensive Review

    Experimental research is alternatively known as the true type of research wherein the research is conducted by the intervention of a new drug/device/method (educational research). Most true experiments use randomized control trials that remove bias and neutralize the confounding variables that may interfere with the results of research [ 28 ].

  18. Clinical research methods for treatment, diagnosis, prognosis, etiology

    This narrative review is an introduction for health professionals on how to conduct and report clinical research on six categories: treatment, diagnosis/differential diagnosis, prognosis, etiology, screening, and prevention. The importance of beginning with an appropriate clinical question and the e …

  19. Foundations of Clinical Research

    Foundations of Clinical Research. This Harvard Medical School six-month, application-based certificate program provides the essential skill sets and fundamental knowledge required to begin or expand your clinical research career. Learn More. September 28, 2024 - April 6, 2025. $6,900 - $7,900.

  20. Why Should the FDA Focus on Pragmatic Clinical Research?

    This Viewpoint from the FDA discusses how pragmatic clinical research—assessment that uses real-world data, often in combination with research data, after initial marketing approval—can help in evaluation of new technologies, benefit research sites in underresourced settings, and better inform...

  21. Clinical Research Methods

    Clinical Research Methods. Director: Todd Ogden, PhD. The Mailman School offers the degree of Master of Science in Biostatistics, with an emphasis on issues in the statistical analysis and design of clinical studies. The Clinical Research Methods track was conceived and designed for clinicians who are pursuing research careers in academic medicine.

  22. Clinical Research Methodology Curriculum

    The Clinical Research Methodology Curriculum is currently accepting applications for the 2023-2024 academic year. The application deadline to submit is Friday, August 18, 2023 at 5:00PM. View application instructions and eligibility criteria. The Clinical Research Methodology Curriculum (CRMC) is a one-year clinical research methodology for ...

  23. Case Study Research Method in Psychology

    Case studies are in-depth investigations of a person, group, event, or community. Typically, data is gathered from various sources using several methods (e.g., observations & interviews). The case study research method originated in clinical medicine (the case history, i.e., the patient's personal history). In psychology, case studies are ...

  24. An Assessment of Clinical Research Self-Efficacy among Researchers at

    The authors observed a significant increase in clinical research self-efficacy 1 year after clinical research training,[15] which supports the hypothesis that participation in a research training program can contribute to the development of clinical research self-efficacy and eventually translate into better research outcomes.[16,17] Robinson et ...

  25. Brain Metabolic Imaging by Magnetic Resonance Imaging and ...

    This Research Topic is the second volume of the Research Topic "Brain Metabolic Imaging by Magnetic Resonance Imaging and Spectroscopy: Methods and Clinical Applications". Please see the first volume here. Brain metabolism reveals pathways by which neuronal and glial cells use nutrients ...

  26. Current status and ongoing needs for the teaching and assessment of

    Clinical reasoning (CR) is a crucial ability that can prevent errors in patient care. Despite its important role, CR is often not taught explicitly and, even when it is taught, typically not all aspects of this ability are addressed in health professions education. Recent research has shown the need for explicit teaching of CR for both students and teachers.

  27. Analyzing the FDA's Approach to Diversity In Clinical Trials

    Additionally, mistrust of clinical research among certain populations also impacts enrollment; Discussing methods for broadening eligibility criteria to clinical trials of drugs intended to treat rare diseases or conditions. Early engagement with patient advocacy groups, experts and patients with the disease to solicit feedback regarding trial ...

  28. Clinical Research Methodology I: Introduction to Randomized Trials

    The World Health Organization defines a clinical trial as "any research study that prospectively assigns human participants or groups of humans to one or more health-related interventions to evaluate the effects on health outcomes."[1] Randomization refers to the method of assignment of the intervention or comparison(s). (A minimal illustrative randomization sketch appears after this list.)

  29. Effects of intensive lifestyle changes on the progression of mild

    Background Evidence links lifestyle factors with Alzheimer's disease (AD). We report the first randomized, controlled clinical trial to determine if intensive lifestyle changes may beneficially affect the progression of mild cognitive impairment (MCI) or early dementia due to AD. Methods A 1:1 multicenter randomized controlled phase 2 trial, ages 45-90 with MCI or early dementia due to AD ...

  30. Clinical research study designs: The essentials

    Introduction. In clinical research, our aim is to design a study, which would be able to derive a valid and meaningful scientific conclusion using appropriate statistical methods that can be translated to the "real world" setting.[1] Before choosing a study design, one must establish aims and objectives of the study, and choose an appropriate target population that is most representative of ...
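
Entries 10 and 13 above note that case-control studies compare previous exposures between people with and without a condition. The usual numerical summary of that comparison is an odds ratio. The sketch below is illustrative only: the function name and the 2×2 counts are hypothetical assumptions for this example and are not drawn from any study cited on this page.

```python
def odds_ratio(exposed_cases: int, unexposed_cases: int,
               exposed_controls: int, unexposed_controls: int) -> float:
    """Odds ratio from a 2x2 case-control table:
    (odds of exposure among cases) / (odds of exposure among controls)."""
    return (exposed_cases / unexposed_cases) / (exposed_controls / unexposed_controls)

if __name__ == "__main__":
    # Hypothetical counts for illustration: 40 of 100 cases were exposed,
    # versus 20 of 100 controls.
    print(f"Odds ratio: {odds_ratio(40, 60, 20, 80):.2f}")  # (40/60) / (20/80) = 2.67
```

An odds ratio above 1 suggests the exposure is more common among cases than controls; confidence intervals and adjustment for confounding, which real analyses require, are omitted here for brevity.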
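
Entry 28 defines randomization as the method of assigning participants to an intervention or comparison group, and entry 29 describes a 1:1 randomized trial. The following is a minimal sketch of simple 1:1 allocation; the group labels, seed, and participant identifiers are assumptions made for the example, not the allocation procedure of any study cited above.

```python
import random

def randomize_1_to_1(participant_ids, seed=2024):
    """Assign participants to 'intervention' or 'control' in a 1:1 ratio
    by shuffling the ID list and splitting it in half (illustrative only)."""
    rng = random.Random(seed)  # fixed seed so the example is reproducible
    ids = list(participant_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    assignments = {pid: "intervention" for pid in ids[:half]}
    assignments.update({pid: "control" for pid in ids[half:]})
    return assignments

if __name__ == "__main__":
    # Hypothetical participant identifiers, for illustration only.
    participants = [f"P{i:03d}" for i in range(1, 11)]
    for pid, arm in sorted(randomize_1_to_1(participants).items()):
        print(pid, arm)
```

Real trials typically use blocked or stratified randomization, concealed allocation, and an audit trail; none of that machinery is shown in this sketch.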