• Open access
  • Published: 07 September 2020

A tutorial on methodological studies: the what, when, how and why

  • Lawrence Mbuagbaw   ORCID: orcid.org/0000-0001-5855-5461 1 , 2 , 3 ,
  • Daeria O. Lawson 1 ,
  • Livia Puljak 4 ,
  • David B. Allison 5 &
  • Lehana Thabane 1 , 2 , 6 , 7 , 8  

BMC Medical Research Methodology, volume 20, Article number: 226 (2020)


Methodological studies – studies that evaluate the design, analysis or reporting of other research-related reports – play an important role in health research. They help to highlight issues in the conduct of research with the aim of improving health research methodology, and ultimately reducing research waste.

We provide an overview of some of the key aspects of methodological studies such as what they are, and when, how and why they are done. We adopt a “frequently asked questions” format to facilitate reading this paper and provide multiple examples to help guide researchers interested in conducting methodological studies. Some of the topics addressed include: is it necessary to publish a study protocol? How to select relevant research reports and databases for a methodological study? What approaches to data extraction and statistical analysis should be considered when conducting a methodological study? What are potential threats to validity and is there a way to appraise the quality of methodological studies?

Appropriate reflection and application of basic principles of epidemiology and biostatistics are required in the design and analysis of methodological studies. This paper provides an introduction for further discussion about the conduct of methodological studies.


The field of meta-research (or research-on-research) has proliferated in recent years in response to issues with research quality and conduct [ 1 , 2 , 3 ]. As the name suggests, this field targets issues with research design, conduct, analysis and reporting. Various types of research reports are often examined as the unit of analysis in these studies (e.g. abstracts, full manuscripts, trial registry entries). Like many other novel fields of research, meta-research saw widespread use before reporting guidance was developed. This was also the case with randomized trials, for which risk of bias tools and reporting guidelines were only developed much later – after many trials had been published and noted to have limitations [ 4 , 5 ] – and with systematic reviews [ 6 , 7 , 8 ]. In the absence of formal guidance, studies that report on research differ substantially in how they are named, conducted and reported [ 9 , 10 ]. This creates challenges in identifying, summarizing and comparing them. In this tutorial paper, we will use the term methodological study to refer to any study that reports on the design, conduct, analysis or reporting of primary or secondary research-related reports (such as trial registry entries and conference abstracts).

In the past 10 years, there has been an increase in the use of terms related to methodological studies (based on records retrieved with a keyword search [in the title and abstract] for “methodological review” and “meta-epidemiological study” in PubMed up to December 2019), suggesting that these studies may be appearing more frequently in the literature. See Fig.  1 .

Figure 1. Trends in the number of studies that mention “methodological review” or “meta-epidemiological study” in PubMed.

The methods used in many methodological studies have been borrowed from systematic and scoping reviews. This practice has influenced the direction of the field, with many methodological studies including searches of electronic databases, screening of records, duplicate data extraction and assessments of risk of bias in the included studies. However, the research questions posed in methodological studies do not always require the approaches listed above, and guidance is needed on when and how to apply these methods to a methodological study. Even though methodological studies can be conducted on qualitative or mixed methods research, this paper focuses on and draws examples exclusively from quantitative research.

The objectives of this paper are to provide some insights on how to conduct methodological studies so that there is greater consistency between the research questions posed, and the design, analysis and reporting of findings. We provide multiple examples to illustrate concepts and a proposed framework for categorizing methodological studies in quantitative research.

What is a methodological study?

Any study that describes or analyzes methods (design, conduct, analysis or reporting) in published (or unpublished) literature is a methodological study. Consequently, the scope of methodological studies is quite extensive and includes, but is not limited to, topics as diverse as: research question formulation [ 11 ]; adherence to reporting guidelines [ 12 , 13 , 14 ] and consistency in reporting [ 15 ]; approaches to study analysis [ 16 ]; investigating the credibility of analyses [ 17 ]; and studies that synthesize these methodological studies [ 18 ]. While the nomenclature of methodological studies is not uniform, the intents and purposes of these studies remain fairly consistent – to describe or analyze methods in primary or secondary studies. As such, methodological studies may also be classified as a subtype of observational studies.

Parallel to this are experimental studies that compare different methods. Even though they play an important role in informing optimal research methods, experimental methodological studies are beyond the scope of this paper. Examples of such studies include the randomized trials by Buscemi et al., comparing single data extraction to double data extraction [ 19 ], and Carrasco-Labra et al., comparing approaches to presenting findings in Grading of Recommendations, Assessment, Development and Evaluations (GRADE) summary of findings tables [ 20 ]. In these studies, the unit of analysis is the person or group of individuals applying the methods. For further reading, we direct readers to the Studies Within a Trial (SWAT) and Studies Within a Review (SWAR) programme operated through the Hub for Trials Methodology Research, a potentially useful resource for these types of experimental studies [ 21 ]. Lastly, this paper is not meant to inform the conduct of research using computational simulation and mathematical modeling, for which some guidance already exists [ 22 ], or studies on the development of methods using consensus-based approaches.

When should we conduct a methodological study?

Methodological studies occupy a unique niche in health research that allows them to inform methodological advances. Methodological studies should also be conducted as precursors to reporting guideline development, as they provide an opportunity to understand current practices and help to identify the need for guidance and gaps in methodological or reporting quality. For example, the development of the popular Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines was preceded by methodological studies identifying poor reporting practices [ 23 , 24 ]. After reporting guidelines are published, methodological studies can also be used to monitor uptake of the guidelines.

These studies can also be conducted to inform the state of the art for design, analysis and reporting practices across different types of health research fields, with the aim of improving research practices, and preventing or reducing research waste. For example, Samaan et al. conducted a scoping review of adherence to different reporting guidelines in health care literature [ 18 ]. Methodological studies can also be used to determine the factors associated with reporting practices. For example, Abbade et al. investigated journal characteristics associated with the use of the Participants, Intervention, Comparison, Outcome, Timeframe (PICOT) format in framing research questions in trials of venous ulcer disease [ 11 ].

How often are methodological studies conducted?

There is no clear answer to this question. Based on a search of PubMed, the use of related terms (“methodological review” and “meta-epidemiological study”) – and therefore, the number of methodological studies – is on the rise. However, many other terms are used to describe methodological studies. There are also many studies that explore design, conduct, analysis or reporting of research reports, but that do not use any specific terms to describe or label their study design in terms of “methodology”. This diversity in nomenclature makes a census of methodological studies elusive. Appropriate terminology and key words for methodological studies are needed to facilitate improved accessibility for end-users.

Why do we conduct methodological studies?

Methodological studies provide information on the design, conduct, analysis or reporting of primary and secondary research and can be used to appraise the quality, quantity, completeness, accuracy and consistency of health research. These issues can be explored in specific fields, journals, databases, geographical regions and time periods. For example, Areia et al. explored the quality of reporting of endoscopic diagnostic studies in gastroenterology [ 25 ]; Knol et al. investigated the reporting of p-values in baseline tables of randomized trials published in high impact journals [ 26 ]; Chen et al. described adherence to the Consolidated Standards of Reporting Trials (CONSORT) statement in Chinese journals [ 27 ]; and Hopewell et al. described the effect of editors’ implementation of CONSORT guidelines on the reporting of abstracts over time [ 28 ]. Methodological studies provide useful information to researchers, clinicians, editors, publishers and users of health literature. As a result, these studies have been a cornerstone of important methodological developments over the past two decades and have informed the development of many health research guidelines, including the highly cited CONSORT statement [ 5 ].

Where can we find methodological studies?

Methodological studies can be found in most common biomedical bibliographic databases (e.g. Embase, MEDLINE, PubMed, Web of Science). However, the biggest caveat is that methodological studies are hard to identify in the literature due to the wide variety of names used and the lack of comprehensive databases dedicated to them. A handful can be found in the Cochrane Library as “Cochrane Methodology Reviews”, but these studies only cover methodological issues related to systematic reviews. Previous attempts to catalogue all empirical studies of methods used in reviews were abandoned 10 years ago [ 29 ]. In other databases, a variety of search terms may be applied with different levels of sensitivity and specificity.

Some frequently asked questions about methodological studies

In this section, we have outlined responses to questions that might help inform the conduct of methodological studies.

Q: How should I select research reports for my methodological study?

A: Selection of research reports for a methodological study depends on the research question and eligibility criteria. Once a clear research question is set and the nature of literature one desires to review is known, one can then begin the selection process. Selection may begin with a broad search, especially if the eligibility criteria are not apparent. For example, a methodological study of Cochrane Reviews of HIV would not require a complex search as all eligible studies can easily be retrieved from the Cochrane Library after checking a few boxes [ 30 ]. On the other hand, a methodological study of subgroup analyses in trials of gastrointestinal oncology would require a search to find such trials, and further screening to identify trials that conducted a subgroup analysis [ 31 ].

The strategies used for identifying participants in observational studies can apply here. One may use a systematic search to identify all eligible studies. If the number of eligible studies is unmanageable, a random sample of articles can be expected to provide comparable results if it is sufficiently large [ 32 ]. For example, Wilson et al. used a random sample of trials from the Cochrane Stroke Group’s Trial Register to investigate completeness of reporting [ 33 ]. It is possible that a simple random sample would lead to underrepresentation of units (i.e. research reports) that are smaller in number. This is relevant if the investigators wish to compare multiple groups but have too few units in one group. In this case a stratified sample would help to create equal groups. For example, in a methodological study comparing Cochrane and non-Cochrane reviews, Kahale et al. drew random samples from both groups [ 34 ]. Alternatively, systematic or purposeful sampling strategies can be used and we encourage researchers to justify their selected approaches based on the study objective.
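The stratified approach described above can be sketched in a few lines of Python. This is a minimal sketch, not part of any cited study: the sampling frame, the `type` field and the counts are hypothetical, and the fixed seed is there only to make the draw reproducible.

```python
import random

def stratified_sample(records, key, n_per_stratum, seed=0):
    """Draw an equal-sized random sample from each stratum of records.

    records: list of dicts describing retrieved research reports
    key: field to stratify on (e.g. review type)
    n_per_stratum: number of reports to draw from each stratum
    """
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    strata = {}
    for r in records:
        strata.setdefault(r[key], []).append(r)
    sample = []
    for group, items in strata.items():
        # never request more than the stratum contains
        sample.extend(rng.sample(items, min(n_per_stratum, len(items))))
    return sample

# Hypothetical sampling frame: 100 Cochrane and 400 non-Cochrane reviews
frame = [{"id": i, "type": "Cochrane"} for i in range(100)] + \
        [{"id": i, "type": "non-Cochrane"} for i in range(100, 500)]
sample = stratified_sample(frame, "type", 50)
# 50 reviews drawn from each stratum, despite the 1:4 imbalance in the frame
```

A simple random sample of 100 from this frame would, on average, contain only about 20 Cochrane reviews; stratification guarantees equal groups for the comparison.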

Q: How many databases should I search?

A: The number of databases one should search would depend on the approach to sampling, which can include targeting the entire “population” of interest or a sample of that population. If you are interested in including the entire target population for your research question, or drawing a random or systematic sample from it, then a comprehensive and exhaustive search for relevant articles is required. In this case, we recommend using systematic approaches for searching electronic databases (i.e. at least 2 databases with a replicable and time stamped search strategy). The results of your search will constitute a sampling frame from which eligible studies can be drawn.

Alternatively, if your approach to sampling is purposeful, then we recommend targeting the database(s) or data sources (e.g. journals, registries) that include the information you need. For example, if you are conducting a methodological study of high impact journals in plastic surgery and they are all indexed in PubMed, you likely do not need to search any other databases. You may also have a comprehensive list of all journals of interest and can approach your search using the journal names in your database search (or by accessing the journal archives directly from the journal’s website). Even though one could also search journals’ web pages directly, using a database such as PubMed has multiple advantages, such as the use of filters, so the search can be narrowed down to a certain period, or study types of interest. Furthermore, individual journals’ web sites may have different search functionalities, which do not necessarily yield a consistent output.

Q: Should I publish a protocol for my methodological study?

A: A protocol is a description of intended research methods. Currently, only protocols for clinical trials require registration [ 35 ]. Protocols for systematic reviews are encouraged but no formal recommendation exists. The scientific community welcomes the publication of protocols because they help protect against selective outcome reporting and the use of post hoc methodologies to embellish results, and they help to avoid duplication of effort [ 36 ]. While the latter two risks exist in methodological research, the negative consequences may be substantially less than for clinical outcomes. In a sample of 31 methodological studies, 7 (22.6%) referenced a published protocol [ 9 ]. In the Cochrane Library, there are 15 protocols for methodological reviews (as of 21 July 2020). This suggests that publishing protocols for methodological studies is not uncommon.

Authors can consider publishing their study protocol in a scholarly journal as a manuscript. Advantages of such publication include obtaining peer-review feedback about the planned study and easy retrieval by searching databases such as PubMed. The disadvantages of publishing protocols include delays associated with manuscript handling and peer review, as well as costs: few journals publish study protocols, and those that do mostly charge article-processing fees [ 37 ]. Authors who would like to make their protocol publicly available without publishing it in a scholarly journal can deposit it in a publicly available repository, such as the Open Science Framework ( https://osf.io/ ).

Q: How to appraise the quality of a methodological study?

A: To date, there is no published tool for appraising the risk of bias in a methodological study, but in principle, a methodological study can be considered a type of observational study. Therefore, during conduct or appraisal, care should be taken to avoid the biases common in observational studies [ 38 ]: selection bias, lack of comparability between groups, and errors in ascertaining exposures or outcomes. In other words, to generate a representative sample, a comprehensive, reproducible search may be necessary to build a sampling frame. Additionally, random sampling may be necessary to ensure that all the included research reports have the same probability of being selected, and the screening and selection processes should be transparent and reproducible. To ensure that the groups compared are similar in all characteristics, matching, random sampling or stratified sampling can be used. Statistical adjustments for between-group differences can also be applied at the analysis stage. Finally, duplicate data extraction can reduce errors in the assessment of exposures or outcomes.

Q: Should I justify a sample size?

A: In all instances where one is not using the target population (i.e. the group to which inferences from the research report are directed) [ 39 ], a sample size justification is good practice. The sample size justification may take the form of a description of what is expected to be achieved with the number of articles selected, or a formal sample size estimation that outlines the number of articles required to answer the research question with a certain precision and power. Sample size justifications in methodological studies are reasonable in the following instances:

Comparing two groups

Determining a proportion, mean or another quantifier

Determining factors associated with an outcome using regression-based analyses

For example, El Dib et al. computed a sample size requirement for a methodological study of diagnostic strategies in randomized trials, based on a confidence interval approach [ 40 ].
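To make the confidence interval approach concrete, the following sketch computes the number of articles needed to estimate a proportion to a given precision, using the standard normal-approximation formula n = z²p(1−p)/d². The expected proportion and margin in the example are hypothetical, not taken from El Dib et al.

```python
import math

def n_for_proportion(p, margin, conf=0.95):
    """Articles needed to estimate a proportion p to within +/- margin,
    using the normal-approximation confidence interval."""
    # two-sided z critical values for common confidence levels
    z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}[conf]
    return math.ceil(z**2 * p * (1 - p) / margin**2)

# e.g. expecting ~30% of trials to report a subgroup analysis,
# estimated to within +/- 5 percentage points at 95% confidence:
n_for_proportion(0.30, 0.05)  # 323 articles
```

When no reasonable guess for p is available, p = 0.5 gives the most conservative (largest) sample size.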

Q: What should I call my study?

A: Other terms which have been used to describe or label methodological studies include “methodological review”, “methodological survey”, “meta-epidemiological study”, “systematic review”, “systematic survey”, “meta-research”, “research-on-research” and many others. We recommend that the study nomenclature be clear, unambiguous, informative and allow for appropriate indexing. Nomenclature that should be avoided includes “systematic review”, as this will likely be confused with a systematic review of a clinical question. “Systematic survey” may also lead to confusion about whether the survey was systematic (i.e. using a preplanned methodology) or used “systematic” sampling (i.e. a sampling approach using specific intervals to determine who is selected) [ 32 ]. Any of the above meanings of the word “systematic” may be true for methodological studies, which could be misleading. “Meta-epidemiological study” is ideal for indexing, but not very informative, as it describes an entire field. The term “review” may point towards an appraisal or “review” of the design, conduct, analysis or reporting (or methodological components) of the targeted research reports, yet it has also been used to describe narrative reviews [ 41 , 42 ]. The term “survey” is also in line with the approaches used in many methodological studies [ 9 ], and would be indicative of the sampling procedures of this study design. In the absence of guidelines on nomenclature, however, the term “methodological study” is broad enough to capture most such scenarios.

Q: Should I account for clustering in my methodological study?

A: Data from methodological studies are often clustered. For example, articles coming from a specific source may have different reporting standards (e.g. the Cochrane Library). Articles within the same journal may be similar due to editorial practices and policies, reporting requirements and endorsement of guidelines. There is emerging evidence that these are real concerns that should be accounted for in analyses [ 43 ]. Some cluster variables are described in the section: “ What variables are relevant to methodological studies?”

A variety of modelling approaches can be used to account for correlated data, including marginal, fixed or mixed effects regression models with appropriate computation of standard errors [ 44 ]. For example, Kosa et al. used generalized estimating equations to account for the correlation of articles within journals [ 15 ]. Not accounting for clustering can lead to incorrect p-values, unduly narrow confidence intervals, and biased estimates [ 45 ].
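A quick way to see why clustering matters is the design effect, DEFF = 1 + (m − 1) × ICC, which inflates the variance of a naive estimate when articles are clustered (e.g. within journals). The sketch below uses hypothetical values for the cluster size, intraclass correlation and naive standard error; it illustrates the general principle rather than the GEE approach used by Kosa et al.

```python
import math

def design_effect(avg_cluster_size, icc):
    """Variance inflation from clustering: DEFF = 1 + (m - 1) * ICC."""
    return 1 + (avg_cluster_size - 1) * icc

def corrected_se(naive_se, avg_cluster_size, icc):
    """Standard error corrected for clustering of articles within journals."""
    return naive_se * math.sqrt(design_effect(avg_cluster_size, icc))

# Hypothetical: 10 articles per journal, ICC of 0.05 among articles
# from the same journal, naive SE of 0.02 for a reporting proportion
corrected_se(0.02, 10, 0.05)  # ~0.024: the naive SE understates uncertainty
```

Even a modest ICC inflates the variance noticeably when clusters are large, which is why confidence intervals from analyses that ignore clustering are unduly narrow.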

Q: Should I extract data in duplicate?

A: Yes. Duplicate data extraction takes more time but results in fewer errors [ 19 ]. Data extraction errors in turn affect the effect estimate [ 46 ], and therefore should be mitigated. Duplicate data extraction should be considered in the absence of other approaches to minimize extraction errors. Much like systematic reviews, however, this area will likely see rapid advances as machine learning and natural language processing technologies emerge to support researchers with screening and data extraction [ 47 , 48 ]. Experience also plays an important role in the quality of extracted data, and inexperienced extractors should be paired with experienced extractors [ 46 , 49 ].
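Agreement between two extractors is often summarized with Cohen's kappa before discrepancies are resolved. The sketch below computes kappa for a single categorical item; the two rating vectors are invented for illustration.

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for agreement between two data extractors
    on the same categorical item (e.g. 'guideline endorsed: yes/no')."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # observed proportion of items on which the extractors agree
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    categories = set(ratings_a) | set(ratings_b)
    # chance agreement from each rater's marginal frequencies
    expected = sum(
        (ratings_a.count(c) / n) * (ratings_b.count(c) / n)
        for c in categories
    )
    return (observed - expected) / (1 - expected)

# Hypothetical extractions of one yes/no item from 8 articles
a = ["yes", "yes", "no", "no", "yes", "no", "yes", "no"]
b = ["yes", "yes", "no", "yes", "yes", "no", "yes", "no"]
round(cohens_kappa(a, b), 2)  # 0.75
```

Kappa corrects raw percent agreement (here 7/8 = 0.875) for the agreement expected by chance, so it is a more honest summary when one category dominates.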

Q: Should I assess the risk of bias of research reports included in my methodological study?

A: Risk of bias is most useful for determining the certainty that can be placed in the effect measure from a study. In methodological studies, risk of bias may not serve the purpose of determining the trustworthiness of results, as effect measures are often not the primary goal of methodological studies. Determining risk of bias in methodological studies is likely a practice borrowed from systematic review methodology, but its intrinsic value in methodological studies is not obvious. When it is part of the research question, investigators often focus on one aspect of risk of bias. For example, Speich investigated how blinding was reported in surgical trials [ 50 ], and Abraha et al. investigated the application of intention-to-treat analyses in systematic reviews and trials [ 51 ].

Q: What variables are relevant to methodological studies?

A: There is empirical evidence that certain variables may inform the findings in a methodological study. We outline some of these and provide a brief overview below:

Country: Countries and regions differ in their research cultures, and the resources available to conduct research. Therefore, it is reasonable to believe that there may be differences in methodological features across countries. Methodological studies have reported loco-regional differences in reporting quality [ 52 , 53 ]. This may also be related to challenges non-English speakers face in publishing papers in English.

Authors’ expertise: The inclusion of authors with expertise in research methodology, biostatistics, and scientific writing is likely to influence the end-product. Oltean et al. found that among randomized trials in orthopaedic surgery, the use of analyses that accounted for clustering was more likely when specialists (e.g. statistician, epidemiologist or clinical trials methodologist) were included on the study team [ 54 ]. Fleming et al. found that including methodologists in the review team was associated with appropriate use of reporting guidelines [ 55 ].

Source of funding and conflicts of interest: Some studies have found that funded studies are reported better [ 56 , 57 ], while others have found no such association [ 53 , 58 ]. The presence of funding would indicate the availability of resources deployed to ensure optimal design, conduct, analysis and reporting. However, the source of funding may introduce conflicts of interest and warrant assessment. For example, Kaiser et al. investigated the effect of industry funding on obesity or nutrition randomized trials and found that reporting quality was similar [ 59 ]. Thomas et al. looked at the reporting quality of long-term weight loss trials and found that industry-funded studies were reported better [ 60 ]. Kan et al. examined the association between industry funding and “positive trials” (trials reporting a significant intervention effect) and found that industry funding was highly predictive of a positive trial [ 61 ]. This finding is similar to that of a recent Cochrane Methodology Review by Hansen et al. [ 62 ].

Journal characteristics: Certain journals’ characteristics may influence the study design, analysis or reporting. Characteristics such as journal endorsement of guidelines [ 63 , 64 ], and Journal Impact Factor (JIF) have been shown to be associated with reporting [ 63 , 65 , 66 , 67 ].

Study size (sample size/number of sites): Some studies have shown that reporting is better in larger studies [ 53 , 56 , 58 ].

Year of publication: It is reasonable to assume that design, conduct, analysis and reporting of research will change over time. Many studies have demonstrated improvements in reporting over time or after the publication of reporting guidelines [ 68 , 69 ].

Type of intervention: In a methodological study of reporting quality of weight loss intervention studies, Thabane et al. found that trials of pharmacologic interventions were reported better than trials of non-pharmacologic interventions [ 70 ].

Interactions between variables: Complex interactions between the previously listed variables are possible. High income countries with more resources may be more likely to conduct larger studies and incorporate a variety of experts. Authors in certain countries may prefer certain journals, and journal endorsement of guidelines and editorial policies may change over time.

Q: Should I focus only on high impact journals?

A: Investigators may choose to investigate only high impact journals because they are more likely to influence practice and policy, or because they assume that methodological standards would be higher. However, the JIF may severely limit the scope of articles included and may skew the sample towards articles with positive findings. The generalizability and applicability of findings from a handful of journals must be examined carefully, especially since the JIF varies over time. Even among journals that are all “high impact”, variations exist in methodological standards.

Q: Can I conduct a methodological study of qualitative research?

A: Yes. Even though a lot of methodological research has been conducted in the quantitative research field, methodological studies of qualitative studies are feasible. Certain databases that catalogue qualitative research including the Cumulative Index to Nursing & Allied Health Literature (CINAHL) have defined subject headings that are specific to methodological research (e.g. “research methodology”). Alternatively, one could also conduct a qualitative methodological review; that is, use qualitative approaches to synthesize methodological issues in qualitative studies.

Q: What reporting guidelines should I use for my methodological study?

A: No guideline covers the entire scope of methodological studies. One adaptation of the PRISMA guidelines has been published, which works well for studies that aim to use the entire target population of research reports [ 71 ]. However, it is not widely used (40 citations in 2 years as of 09 December 2019), and methodological studies designed as cross-sectional or before-after studies require a more fit-for-purpose guideline. A more encompassing reporting guideline for a broad range of methodological studies is currently under development [ 72 ]. In the absence of formal guidance, the general requirements of scientific reporting should be respected, and authors of methodological studies should focus on transparency and reproducibility.

Q: What are the potential threats to validity and how can I avoid them?

A: Methodological studies may be compromised by a lack of internal or external validity. The main threats to internal validity in methodological studies are selection bias and confounding. Investigators must ensure that the methods used to select articles do not make the sample differ systematically from the set of articles to which they would like to make inferences. For example, attempting to extrapolate to all journals after analyzing only high-impact journals would be misleading.

Many factors (confounders) may distort the association between the exposure and outcome if the included research reports differ with respect to these factors [ 73 ]. For example, when examining the association between source of funding and completeness of reporting, it may be necessary to account for journals that endorse the guidelines. Confounding bias can be addressed by restriction, matching and statistical adjustment [ 73 ]. Restriction appears to be the method of choice for many investigators who choose to include only high impact journals or articles in a specific field. For example, Knol et al. examined the reporting of p -values in baseline tables of high impact journals [ 26 ]. Matching is also sometimes used. In the methodological study of non-randomized interventional studies of elective ventral hernia repair, Parker et al. matched prospective studies with retrospective studies and compared reporting standards [ 74 ]. Some other methodological studies use statistical adjustments. For example, Zhang et al. used regression techniques to determine the factors associated with missing participant data in trials [ 16 ].
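As a toy illustration of statistical adjustment by stratification, the Mantel-Haenszel odds ratio pools stratum-specific 2×2 tables (here, hypothetically: exposure = industry funding, outcome = complete reporting, strata = guideline-endorsing vs non-endorsing journals). The counts and variable names below are ours, not drawn from any of the cited studies, and this is one simple alternative to the regression techniques used by Zhang et al.

```python
def mantel_haenszel_or(strata):
    """Mantel-Haenszel odds ratio across stratum-specific 2x2 tables,
    adjusting the exposure-outcome association for a stratified confounder.

    strata: list of (a, b, c, d) tuples per confounder level:
        a = exposed with outcome,   b = exposed without outcome,
        c = unexposed with outcome, d = unexposed without outcome
    """
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# Hypothetical counts in guideline-endorsing and non-endorsing journals
tables = [(30, 10, 20, 20), (10, 30, 5, 35)]
mantel_haenszel_or(tables)  # ~2.71, the funding-reporting OR adjusted for endorsement
```

The pooled estimate is only meaningful if the stratum-specific odds ratios are reasonably homogeneous; otherwise the association should be reported separately by stratum.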

With regard to external validity, researchers interested in conducting methodological studies must consider how generalizable or applicable their findings are. This should tie in closely with the research question and should be made explicit. For example, findings from methodological studies on trials published in high impact cardiology journals cannot be assumed to apply to trials in other fields. Investigators must also ensure that their sample truly represents the target population, either by a) conducting a comprehensive and exhaustive search, or b) using an appropriate, justified, randomly selected sample of research reports.

Even applicability to high impact journals may vary based on the investigators’ definition, and over time. For example, for high impact journals in the field of general medicine, Bouwmeester et al. included the Annals of Internal Medicine (AIM), BMJ, the Journal of the American Medical Association (JAMA), Lancet, the New England Journal of Medicine (NEJM), and PLoS Medicine ( n  = 6) [ 75 ]. In contrast, the high impact journals selected in the methodological study by Schiller et al. were BMJ, JAMA, Lancet, and NEJM ( n  = 4) [ 76 ]. Another methodological study by Kosa et al. included AIM, BMJ, JAMA, Lancet and NEJM ( n  = 5). In the methodological study by Thabut et al., journals with a JIF greater than 5 were considered to be high impact. Riado Minguez et al. used first quartile journals in the Journal Citation Reports (JCR) for a specific year to determine “high impact” [ 77 ]. Ultimately, the definition of high impact will be based on the number of journals the investigators are willing to include, the year of impact and the JIF cut-off [ 78 ]. We acknowledge that the term “generalizability” may apply differently for methodological studies, especially when in many instances it is possible to include the entire target population in the sample studied.

Finally, methodological studies are not exempt from information bias which may stem from discrepancies in the included research reports [ 79 ], errors in data extraction, or inappropriate interpretation of the information extracted. Likewise, publication bias may also be a concern in methodological studies, but such concepts have not yet been explored.

A proposed framework

In order to inform discussions about methodological studies and the development of guidance for what should be reported, we have outlined some key features of methodological studies that can be used to classify them. For each of the categories outlined below, we provide an example. In our experience, the choice of approach to completing a methodological study can be informed by asking the following four questions:

What is the aim?

Methodological studies that investigate bias

A methodological study may be focused on exploring sources of bias in primary or secondary studies (meta-bias), or on how bias is analyzed. We have taken care to distinguish bias (i.e. systematic deviations from the truth irrespective of the source) from reporting quality or completeness (i.e. not adhering to a specific reporting guideline or norm). An example of where this distinction matters is a randomized trial with no blinding. This study (depending on the nature of the intervention) would be at risk of performance bias. However, if the authors report that their study was not blinded, they would have reported adequately. In fact, some methodological studies attempt to capture both “quality of conduct” and “quality of reporting”, such as Ritchie et al., who reported on the risk of bias in randomized trials of pharmacy practice interventions [ 80 ]. Babic et al. investigated how risk of bias was used to inform sensitivity analyses in Cochrane reviews [ 81 ]. Biases related to the choice of outcomes can also be explored. For example, Tan et al. investigated differences in treatment effect size based on the outcome reported [ 82 ].

Methodological studies that investigate quality (or completeness) of reporting

Methodological studies may report quality of reporting against a reporting checklist (i.e. adherence to guidelines) or against expected norms. For example, Croitoru et al. reported on the quality of reporting in systematic reviews published in dermatology journals based on their adherence to the PRISMA statement [ 83 ], and Khan et al. described the quality of reporting of harms in randomized controlled trials published in high impact cardiovascular journals based on the CONSORT extension for harms [ 84 ]. Other methodological studies investigate reporting of certain features of interest that may not be part of formally published checklists or guidelines. For example, Mbuagbaw et al. described how often the implications for research are elaborated using the Evidence, Participants, Intervention, Comparison, Outcome, Timeframe (EPICOT) format [ 30 ].

Methodological studies that investigate the consistency of reporting

Sometimes investigators may be interested in how consistent reports of the same research are, as it is expected that there should be consistency between: conference abstracts and published manuscripts; manuscript abstracts and manuscript main text; and trial registration and published manuscript. For example, Rosmarakis et al. investigated consistency between conference abstracts and full text manuscripts [ 85 ].

Methodological studies that investigate factors associated with reporting

In addition to identifying issues with reporting in primary and secondary studies, authors of methodological studies may be interested in determining the factors that are associated with certain reporting practices. Many methodological studies incorporate this, albeit as a secondary outcome. For example, Farrokhyar et al. investigated the factors associated with reporting quality in randomized trials of coronary artery bypass grafting surgery [ 53 ].

Methodological studies that investigate methods

Methodological studies may also be used to describe or compare methods, and to examine the factors associated with the choice of methods. For example, Mueller et al. described the methods used for systematic reviews and meta-analyses of observational studies [ 86 ].

Methodological studies that summarize other methodological studies

Some methodological studies synthesize results from other methodological studies. For example, Li et al. conducted a scoping review of methodological reviews that investigated consistency between full text and abstracts in primary biomedical research [ 87 ].

Methodological studies that investigate nomenclature and terminology

Some methodological studies may investigate the use of names and terms in health research. For example, Martinic et al. investigated the definitions of systematic reviews used in overviews of systematic reviews (OSRs), meta-epidemiological studies and epidemiology textbooks [ 88 ].

Other types of methodological studies

In addition to the previously mentioned experimental methodological studies, there may exist other types of methodological studies not captured here.

What is the design?

Methodological studies that are descriptive

Most methodological studies are purely descriptive and report their findings as counts (percent) and means (standard deviation) or medians (interquartile range). For example, Mbuagbaw et al. described the reporting of research recommendations in Cochrane HIV systematic reviews [ 30 ]. Gohari et al. described the quality of reporting of randomized trials in diabetes in Iran [ 12 ].
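As an illustration of this kind of descriptive summary, the sketch below computes counts (percent) and a median (interquartile range) using only the Python standard library. The per-trial counts of reported checklist items and the 20-item adherence cut-off are invented for illustration:

```python
import statistics

# Hypothetical extraction results: number of checklist items reported per trial
# (all numbers invented for illustration; the 20-item cut-off is arbitrary)
items_reported = [18, 22, 15, 25, 20, 17, 23, 19, 21, 16]

# Count (percent) of trials meeting the arbitrary adherence cut-off
count = sum(1 for n in items_reported if n >= 20)
percent = 100 * count / len(items_reported)

# Median (interquartile range) of items reported
median = statistics.median(items_reported)
q1, _, q3 = statistics.quantiles(items_reported, n=4)  # default 'exclusive' method

print(f"Adherent trials: {count} ({percent:.0f}%)")
print(f"Items reported, median (IQR): {median} ({q1}-{q3})")
```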

Methodological studies that are analytical

Some methodological studies are analytical: “analytical studies identify and quantify associations, test hypotheses, identify causes and determine whether an association exists between variables, such as between an exposure and a disease.” [ 89 ] All of these investigations are possible in methodological studies. For example, Kosa et al. investigated the association between agreement in primary outcome from trial registry to published manuscript and study covariates. They found that larger and more recent studies were more likely to have agreement [ 15 ]. Tricco et al. compared the conclusion statements from Cochrane and non-Cochrane systematic reviews with a meta-analysis of the primary outcome and found that non-Cochrane reviews were more likely to report positive findings. These results constitute a test of the null hypothesis that the proportions of Cochrane and non-Cochrane reviews that report positive results are equal [ 90 ].
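A comparison of proportions of this kind can be tested with a two-sample z-test. The sketch below uses invented counts (not the actual figures from the Tricco et al. study) and only the Python standard library:

```python
import math

# Hypothetical counts: reviews with positive conclusions out of those sampled
# (invented numbers for illustration; the cited study reports its own figures)
pos_a, n_a = 60, 100   # non-Cochrane reviews
pos_b, n_b = 30, 100   # Cochrane reviews

p_a, p_b = pos_a / n_a, pos_b / n_b
# Pooled proportion under the null hypothesis of equal proportions
p_pool = (pos_a + pos_b) / (n_a + n_b)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_a - p_b) / se

# Two-sided p-value from the standard normal distribution (via the error function)
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(f"z = {z:.2f}, p = {p_value:.2g}")
```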

What is the sampling strategy?

Methodological studies that include the target population

Methodological reviews with narrow research questions may be able to include the entire target population. For example, in the methodological study of Cochrane HIV systematic reviews, Mbuagbaw et al. included all of the available studies ( n  = 103) [ 30 ].

Methodological studies that include a sample of the target population

Many methodological studies use random samples of the target population [ 33 , 91 , 92 ]. Alternatively, purposeful sampling may be used to limit the sample to a subset of research-related reports published within a certain time period, in journals with a certain ranking, or on a specific topic. Systematic sampling can also be used when random sampling would be challenging to implement.
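Both random and systematic selection can be scripted so that the draw is reproducible and auditable. A minimal sketch, assuming a hypothetical sampling frame of 1,000 report identifiers (the seed and sample size are arbitrary):

```python
import random

# Hypothetical sampling frame: 1,000 eligible report identifiers from a search
frame = [f"report_{i:04d}" for i in range(1000)]
sample_size = 100

# Simple random sample (seeded so the selection can be reproduced)
rng = random.Random(2020)
random_sample = rng.sample(frame, sample_size)

# Systematic sample: every k-th report after a random start
k = len(frame) // sample_size
start = rng.randrange(k)
systematic_sample = frame[start::k]

print(len(random_sample), len(systematic_sample))
```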

What is the unit of analysis?

Methodological studies with a research report as the unit of analysis

Many methodological studies use a research report (e.g. full manuscript of study, abstract portion of the study) as the unit of analysis, and inferences can be made at the study-level. However, both published and unpublished research-related reports can be studied. These may include articles, conference abstracts, registry entries etc.

Methodological studies with a design, analysis or reporting item as the unit of analysis

Some methodological studies report on items which may occur more than once per article. For example, Paquette et al. report on subgroup analyses in Cochrane reviews of atrial fibrillation in which 17 systematic reviews planned 56 subgroup analyses [ 93 ].
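Because such items are nested within articles, the choice of unit of analysis changes the summary obtained. A minimal sketch with invented data, contrasting an item-level estimate with a report-level estimate:

```python
from collections import defaultdict

# Hypothetical extraction: each row is (review_id, subgroup_analysis_reported).
# Items (subgroup analyses) are nested within reviews; counts are invented.
rows = [
    ("R1", True), ("R1", False), ("R1", True),
    ("R2", True),
    ("R3", False), ("R3", False),
]

# Item-level summary: proportion of planned subgroup analyses that were reported
item_rate = sum(reported for _, reported in rows) / len(rows)

# Review-level summary: proportion of reviews reporting any subgroup analysis
by_review = defaultdict(list)
for review_id, reported in rows:
    by_review[review_id].append(reported)
review_rate = sum(any(v) for v in by_review.values()) / len(by_review)

print(f"item-level: {item_rate:.2f}, review-level: {review_rate:.2f}")
```

The two rates differ in this invented example, which is why analyses of repeated items per article may need to account for clustering, as discussed earlier in the paper.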

This framework is outlined in Fig.  2 .

figure 2

A proposed framework for methodological studies

Conclusions

Methodological studies have examined different aspects of reporting such as quality, completeness, consistency and adherence to reporting guidelines. As such, many of the methodological study examples cited in this tutorial are related to reporting. However, as an evolving field, the scope of research questions that can be addressed by methodological studies is expected to increase.

In this paper we have outlined the scope and purpose of methodological studies, along with examples of instances in which various approaches have been used. In the absence of formal guidance on the design, conduct, analysis and reporting of methodological studies, we have provided some advice to help make methodological studies consistent. This advice is grounded in good contemporary scientific practice. Generally, the research question should tie in with the sampling approach and planned analysis. We have also highlighted the variables that may inform findings from methodological studies. Lastly, we have provided suggestions for ways in which authors can categorize their methodological studies to inform their design and analysis.

Availability of data and materials

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

Abbreviations

CONSORT: Consolidated Standards of Reporting Trials

EPICOT: Evidence, Participants, Intervention, Comparison, Outcome, Timeframe

GRADE: Grading of Recommendations, Assessment, Development and Evaluations

PICOT: Participants, Intervention, Comparison, Outcome, Timeframe

PRISMA: Preferred Reporting Items of Systematic reviews and Meta-Analyses

SWAR: Studies Within a Review

SWAT: Studies Within a Trial

Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet. 2009;374(9683):86–9.

Chan AW, Song F, Vickers A, Jefferson T, Dickersin K, Gotzsche PC, Krumholz HM, Ghersi D, van der Worp HB. Increasing value and reducing waste: addressing inaccessible research. Lancet. 2014;383(9913):257–66.

Ioannidis JP, Greenland S, Hlatky MA, Khoury MJ, Macleod MR, Moher D, Schulz KF, Tibshirani R. Increasing value and reducing waste in research design, conduct, and analysis. Lancet. 2014;383(9912):166–75.

Higgins JP, Altman DG, Gotzsche PC, Juni P, Moher D, Oxman AD, Savovic J, Schulz KF, Weeks L, Sterne JA. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928.

Moher D, Schulz KF, Altman DG. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet. 2001;357.

Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JP, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Med. 2009;6(7):e1000100.

Shea BJ, Hamel C, Wells GA, Bouter LM, Kristjansson E, Grimshaw J, Henry DA, Boers M. AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. J Clin Epidemiol. 2009;62(10):1013–20.

Shea BJ, Reeves BC, Wells G, Thuku M, Hamel C, Moran J, Moher D, Tugwell P, Welch V, Kristjansson E, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. Bmj. 2017;358:j4008.

Lawson DO, Leenus A, Mbuagbaw L. Mapping the nomenclature, methodology, and reporting of studies that review methods: a pilot methodological review. Pilot Feasibility Studies. 2020;6(1):13.

Puljak L, Makaric ZL, Buljan I, Pieper D. What is a meta-epidemiological study? Analysis of published literature indicated heterogeneous study designs and definitions. J Comp Eff Res. 2020.

Abbade LPF, Wang M, Sriganesh K, Jin Y, Mbuagbaw L, Thabane L. The framing of research questions using the PICOT format in randomized controlled trials of venous ulcer disease is suboptimal: a systematic survey. Wound Repair Regen. 2017;25(5):892–900.

Gohari F, Baradaran HR, Tabatabaee M, Anijidani S, Mohammadpour Touserkani F, Atlasi R, Razmgir M. Quality of reporting randomized controlled trials (RCTs) in diabetes in Iran; a systematic review. J Diabetes Metab Disord. 2015;15(1):36.

Wang M, Jin Y, Hu ZJ, Thabane A, Dennis B, Gajic-Veljanoski O, Paul J, Thabane L. The reporting quality of abstracts of stepped wedge randomized trials is suboptimal: a systematic survey of the literature. Contemp Clin Trials Commun. 2017;8:1–10.

Shanthanna H, Kaushal A, Mbuagbaw L, Couban R, Busse J, Thabane L. A cross-sectional study of the reporting quality of pilot or feasibility trials in high-impact anesthesia journals. Can J Anaesth. 2018;65(11):1180–95.

Kosa SD, Mbuagbaw L, Borg Debono V, Bhandari M, Dennis BB, Ene G, Leenus A, Shi D, Thabane M, Valvasori S, et al. Agreement in reporting between trial publications and current clinical trial registry in high impact journals: a methodological review. Contemporary Clinical Trials. 2018;65:144–50.

Zhang Y, Florez ID, Colunga Lozano LE, Aloweni FAB, Kennedy SA, Li A, Craigie S, Zhang S, Agarwal A, Lopes LC, et al. A systematic survey on reporting and methods for handling missing participant data for continuous outcomes in randomized controlled trials. J Clin Epidemiol. 2017;88:57–66.

Hernández AV, Boersma E, Murray GD, Habbema JD, Steyerberg EW. Subgroup analyses in therapeutic cardiovascular clinical trials: are most of them misleading? Am Heart J. 2006;151(2):257–64.

Samaan Z, Mbuagbaw L, Kosa D, Borg Debono V, Dillenburg R, Zhang S, Fruci V, Dennis B, Bawor M, Thabane L. A systematic scoping review of adherence to reporting guidelines in health care literature. J Multidiscip Healthc. 2013;6:169–88.

Buscemi N, Hartling L, Vandermeer B, Tjosvold L, Klassen TP. Single data extraction generated more errors than double data extraction in systematic reviews. J Clin Epidemiol. 2006;59(7):697–703.

Carrasco-Labra A, Brignardello-Petersen R, Santesso N, Neumann I, Mustafa RA, Mbuagbaw L, Etxeandia Ikobaltzeta I, De Stio C, McCullagh LJ, Alonso-Coello P. Improving GRADE evidence tables part 1: a randomized trial shows improved understanding of content in summary-of-findings tables with a new format. J Clin Epidemiol. 2016;74:7–18.

The Northern Ireland Hub for Trials Methodology Research: SWAT/SWAR Information [ https://www.qub.ac.uk/sites/TheNorthernIrelandNetworkforTrialsMethodologyResearch/SWATSWARInformation/ ]. Accessed 31 Aug 2020.

Chick S, Sánchez P, Ferrin D, Morrice D. How to conduct a successful simulation study. In: Proceedings of the 2003 winter simulation conference: 2003; 2003. p. 66–70.

Mulrow CD. The medical review article: state of the science. Ann Intern Med. 1987;106(3):485–8.

Sacks HS, Reitman D, Pagano D, Kupelnick B. Meta-analysis: an update. Mount Sinai J Med New York. 1996;63(3–4):216–24.

Areia M, Soares M, Dinis-Ribeiro M. Quality reporting of endoscopic diagnostic studies in gastrointestinal journals: where do we stand on the use of the STARD and CONSORT statements? Endoscopy. 2010;42(2):138–47.

Knol M, Groenwold R, Grobbee D. P-values in baseline tables of randomised controlled trials are inappropriate but still common in high impact journals. Eur J Prev Cardiol. 2012;19(2):231–2.

Chen M, Cui J, Zhang AL, Sze DM, Xue CC, May BH. Adherence to CONSORT items in randomized controlled trials of integrative medicine for colorectal Cancer published in Chinese journals. J Altern Complement Med. 2018;24(2):115–24.

Hopewell S, Ravaud P, Baron G, Boutron I. Effect of editors' implementation of CONSORT guidelines on the reporting of abstracts in high impact medical journals: interrupted time series analysis. BMJ. 2012;344:e4178.

The Cochrane Methodology Register Issue 2 2009 [ https://cmr.cochrane.org/help.htm ]. Accessed 31 Aug 2020.

Mbuagbaw L, Kredo T, Welch V, Mursleen S, Ross S, Zani B, Motaze NV, Quinlan L. Critical EPICOT items were absent in Cochrane human immunodeficiency virus systematic reviews: a bibliometric analysis. J Clin Epidemiol. 2016;74:66–72.

Barton S, Peckitt C, Sclafani F, Cunningham D, Chau I. The influence of industry sponsorship on the reporting of subgroup analyses within phase III randomised controlled trials in gastrointestinal oncology. Eur J Cancer. 2015;51(18):2732–9.

Setia MS. Methodology series module 5: sampling strategies. Indian J Dermatol. 2016;61(5):505–9.

Wilson B, Burnett P, Moher D, Altman DG, Al-Shahi Salman R. Completeness of reporting of randomised controlled trials including people with transient ischaemic attack or stroke: a systematic review. Eur Stroke J. 2018;3(4):337–46.

Kahale LA, Diab B, Brignardello-Petersen R, Agarwal A, Mustafa RA, Kwong J, Neumann I, Li L, Lopes LC, Briel M, et al. Systematic reviews do not adequately report or address missing outcome data in their analyses: a methodological survey. J Clin Epidemiol. 2018;99:14–23.

De Angelis CD, Drazen JM, Frizelle FA, Haug C, Hoey J, Horton R, Kotzin S, Laine C, Marusic A, Overbeke AJPM, et al. Is this clinical trial fully registered?: a statement from the International Committee of Medical Journal Editors*. Ann Intern Med. 2005;143(2):146–8.

Ohtake PJ, Childs JD. Why publish study protocols? Phys Ther. 2014;94(9):1208–9.

Rombey T, Allers K, Mathes T, Hoffmann F, Pieper D. A descriptive analysis of the characteristics and the peer review process of systematic review protocols published in an open peer review journal from 2012 to 2017. BMC Med Res Methodol. 2019;19(1):57.

Grimes DA, Schulz KF. Bias and causal associations in observational research. Lancet. 2002;359(9302):248–52.

Porta M (ed.): A dictionary of epidemiology, 5th edn. Oxford: Oxford University Press, Inc.; 2008.

El Dib R, Tikkinen KAO, Akl EA, Gomaa HA, Mustafa RA, Agarwal A, Carpenter CR, Zhang Y, Jorge EC, Almeida R, et al. Systematic survey of randomized trials evaluating the impact of alternative diagnostic strategies on patient-important outcomes. J Clin Epidemiol. 2017;84:61–9.

Helzer JE, Robins LN, Taibleson M, Woodruff RA Jr, Reich T, Wish ED. Reliability of psychiatric diagnosis. I. a methodological review. Arch Gen Psychiatry. 1977;34(2):129–33.

Chung ST, Chacko SK, Sunehag AL, Haymond MW. Measurements of gluconeogenesis and Glycogenolysis: a methodological review. Diabetes. 2015;64(12):3996–4010.

Sterne JA, Juni P, Schulz KF, Altman DG, Bartlett C, Egger M. Statistical methods for assessing the influence of study characteristics on treatment effects in 'meta-epidemiological' research. Stat Med. 2002;21(11):1513–24.

Moen EL, Fricano-Kugler CJ, Luikart BW, O’Malley AJ. Analyzing clustered data: why and how to account for multiple observations nested within a study participant? PLoS One. 2016;11(1):e0146721.

Zyzanski SJ, Flocke SA, Dickinson LM. On the nature and analysis of clustered data. Ann Fam Med. 2004;2(3):199–200.

Mathes T, Klassen P, Pieper D. Frequency of data extraction errors and methods to increase data extraction quality: a methodological review. BMC Med Res Methodol. 2017;17(1):152.

Bui DDA, Del Fiol G, Hurdle JF, Jonnalagadda S. Extractive text summarization system to aid data extraction from full text in systematic review development. J Biomed Inform. 2016;64:265–72.

Bui DD, Del Fiol G, Jonnalagadda S. PDF text classification to leverage information extraction from publication reports. J Biomed Inform. 2016;61:141–8.

Maticic K, Krnic Martinic M, Puljak L. Assessment of reporting quality of abstracts of systematic reviews with meta-analysis using PRISMA-A and discordance in assessments between raters without prior experience. BMC Med Res Methodol. 2019;19(1):32.

Speich B. Blinding in surgical randomized clinical trials in 2015. Ann Surg. 2017;266(1):21–2.

Abraha I, Cozzolino F, Orso M, Marchesi M, Germani A, Lombardo G, Eusebi P, De Florio R, Luchetta ML, Iorio A, et al. A systematic review found that deviations from intention-to-treat are common in randomized trials and systematic reviews. J Clin Epidemiol. 2017;84:37–46.

Zhong Y, Zhou W, Jiang H, Fan T, Diao X, Yang H, Min J, Wang G, Fu J, Mao B. Quality of reporting of two-group parallel randomized controlled clinical trials of multi-herb formulae: A survey of reports indexed in the Science Citation Index Expanded. Eur J Integrative Med. 2011;3(4):e309–16.

Farrokhyar F, Chu R, Whitlock R, Thabane L. A systematic review of the quality of publications reporting coronary artery bypass grafting trials. Can J Surg. 2007;50(4):266–77.

Oltean H, Gagnier JJ. Use of clustering analysis in randomized controlled trials in orthopaedic surgery. BMC Med Res Methodol. 2015;15:17.

Fleming PS, Koletsi D, Pandis N. Blinded by PRISMA: are systematic reviewers focusing on PRISMA and ignoring other guidelines? PLoS One. 2014;9(5):e96407.

Balasubramanian SP, Wiener M, Alshameeri Z, Tiruvoipati R, Elbourne D, Reed MW. Standards of reporting of randomized controlled trials in general surgery: can we do better? Ann Surg. 2006;244(5):663–7.

de Vries TW, van Roon EN. Low quality of reporting adverse drug reactions in paediatric randomised controlled trials. Arch Dis Child. 2010;95(12):1023–6.

Borg Debono V, Zhang S, Ye C, Paul J, Arya A, Hurlburt L, Murthy Y, Thabane L. The quality of reporting of RCTs used within a postoperative pain management meta-analysis, using the CONSORT statement. BMC Anesthesiol. 2012;12:13.

Kaiser KA, Cofield SS, Fontaine KR, Glasser SP, Thabane L, Chu R, Ambrale S, Dwary AD, Kumar A, Nayyar G, et al. Is funding source related to study reporting quality in obesity or nutrition randomized control trials in top-tier medical journals? Int J Obes. 2012;36(7):977–81.

Thomas O, Thabane L, Douketis J, Chu R, Westfall AO, Allison DB. Industry funding and the reporting quality of large long-term weight loss trials. Int J Obes. 2008;32(10):1531–6.

Khan NR, Saad H, Oravec CS, Rossi N, Nguyen V, Venable GT, Lillard JC, Patel P, Taylor DR, Vaughn BN, et al. A review of industry funding in randomized controlled trials published in the neurosurgical literature-the elephant in the room. Neurosurgery. 2018;83(5):890–7.

Hansen C, Lundh A, Rasmussen K, Hrobjartsson A. Financial conflicts of interest in systematic reviews: associations with results, conclusions, and methodological quality. Cochrane Database Syst Rev. 2019;8:Mr000047.

Kiehna EN, Starke RM, Pouratian N, Dumont AS. Standards for reporting randomized controlled trials in neurosurgery. J Neurosurg. 2011;114(2):280–5.

Liu LQ, Morris PJ, Pengel LH. Compliance to the CONSORT statement of randomized controlled trials in solid organ transplantation: a 3-year overview. Transpl Int. 2013;26(3):300–6.

Bala MM, Akl EA, Sun X, Bassler D, Mertz D, Mejza F, Vandvik PO, Malaga G, Johnston BC, Dahm P, et al. Randomized trials published in higher vs. lower impact journals differ in design, conduct, and analysis. J Clin Epidemiol. 2013;66(3):286–95.

Lee SY, Teoh PJ, Camm CF, Agha RA. Compliance of randomized controlled trials in trauma surgery with the CONSORT statement. J Trauma Acute Care Surg. 2013;75(4):562–72.

Ziogas DC, Zintzaras E. Analysis of the quality of reporting of randomized controlled trials in acute and chronic myeloid leukemia, and myelodysplastic syndromes as governed by the CONSORT statement. Ann Epidemiol. 2009;19(7):494–500.

Alvarez F, Meyer N, Gourraud PA, Paul C. CONSORT adoption and quality of reporting of randomized controlled trials: a systematic analysis in two dermatology journals. Br J Dermatol. 2009;161(5):1159–65.

Mbuagbaw L, Thabane M, Vanniyasingam T, Borg Debono V, Kosa S, Zhang S, Ye C, Parpia S, Dennis BB, Thabane L. Improvement in the quality of abstracts in major clinical journals since CONSORT extension for abstracts: a systematic review. Contemporary Clin trials. 2014;38(2):245–50.

Thabane L, Chu R, Cuddy K, Douketis J. What is the quality of reporting in weight loss intervention studies? A systematic review of randomized controlled trials. Int J Obes. 2007;31(10):1554–9.

Murad MH, Wang Z. Guidelines for reporting meta-epidemiological methodology research. Evidence Based Med. 2017;22(4):139.

METRIC - MEthodological sTudy ReportIng Checklist: guidelines for reporting methodological studies in health research [ http://www.equator-network.org/library/reporting-guidelines-under-development/reporting-guidelines-under-development-for-other-study-designs/#METRIC ]. Accessed 31 Aug 2020.

Jager KJ, Zoccali C, MacLeod A, Dekker FW. Confounding: what it is and how to deal with it. Kidney Int. 2008;73(3):256–60.

Parker SG, Halligan S, Erotocritou M, Wood CPJ, Boulton RW, Plumb AAO, Windsor ACJ, Mallett S. A systematic methodological review of non-randomised interventional studies of elective ventral hernia repair: clear definitions and a standardised minimum dataset are needed. Hernia. 2019.

Bouwmeester W, Zuithoff NPA, Mallett S, Geerlings MI, Vergouwe Y, Steyerberg EW, Altman DG, Moons KGM. Reporting and methods in clinical prediction research: a systematic review. PLoS Med. 2012;9(5):1–12.

Schiller P, Burchardi N, Niestroj M, Kieser M. Quality of reporting of clinical non-inferiority and equivalence randomised trials--update and extension. Trials. 2012;13:214.

Riado Minguez D, Kowalski M, Vallve Odena M, Longin Pontzen D, Jelicic Kadic A, Jeric M, Dosenovic S, Jakus D, Vrdoljak M, Poklepovic Pericic T, et al. Methodological and reporting quality of systematic reviews published in the highest ranking journals in the field of pain. Anesth Analg. 2017;125(4):1348–54.

Thabut G, Estellat C, Boutron I, Samama CM, Ravaud P. Methodological issues in trials assessing primary prophylaxis of venous thrombo-embolism. Eur Heart J. 2005;27(2):227–36.

Puljak L, Riva N, Parmelli E, González-Lorenzo M, Moja L, Pieper D. Data extraction methods: an analysis of internal reporting discrepancies in single manuscripts and practical advice. J Clin Epidemiol. 2020;117:158–64.

Ritchie A, Seubert L, Clifford R, Perry D, Bond C. Do randomised controlled trials relevant to pharmacy meet best practice standards for quality conduct and reporting? A systematic review. Int J Pharm Pract. 2019.

Babic A, Vuka I, Saric F, Proloscic I, Slapnicar E, Cavar J, Pericic TP, Pieper D, Puljak L. Overall bias methods and their use in sensitivity analysis of Cochrane reviews were not consistent. J Clin Epidemiol. 2019.

Tan A, Porcher R, Crequit P, Ravaud P, Dechartres A. Differences in treatment effect size between overall survival and progression-free survival in immunotherapy trials: a Meta-epidemiologic study of trials with results posted at ClinicalTrials.gov. J Clin Oncol. 2017;35(15):1686–94.

Croitoru D, Huang Y, Kurdina A, Chan AW, Drucker AM. Quality of reporting in systematic reviews published in dermatology journals. Br J Dermatol. 2020;182(6):1469–76.

Khan MS, Ochani RK, Shaikh A, Vaduganathan M, Khan SU, Fatima K, Yamani N, Mandrola J, Doukky R, Krasuski RA. Assessing the quality of reporting of harms in randomized controlled trials published in high impact cardiovascular journals. Eur Heart J Qual Care Clin Outcomes. 2019.

Rosmarakis ES, Soteriades ES, Vergidis PI, Kasiakou SK, Falagas ME. From conference abstract to full paper: differences between data presented in conferences and journals. FASEB J. 2005;19(7):673–80.

Mueller M, D’Addario M, Egger M, Cevallos M, Dekkers O, Mugglin C, Scott P. Methods to systematically review and meta-analyse observational studies: a systematic scoping review of recommendations. BMC Med Res Methodol. 2018;18(1):44.

Li G, Abbade LPF, Nwosu I, Jin Y, Leenus A, Maaz M, Wang M, Bhatt M, Zielinski L, Sanger N, et al. A scoping review of comparisons between abstracts and full reports in primary biomedical research. BMC Med Res Methodol. 2017;17(1):181.

Krnic Martinic M, Pieper D, Glatt A, Puljak L. Definition of a systematic review used in overviews of systematic reviews, meta-epidemiological studies and textbooks. BMC Med Res Methodol. 2019;19(1):203.

Analytical study [ https://medical-dictionary.thefreedictionary.com/analytical+study ]. Accessed 31 Aug 2020.

Tricco AC, Tetzlaff J, Pham B, Brehaut J, Moher D. Non-Cochrane vs. Cochrane reviews were twice as likely to have positive conclusion statements: cross-sectional study. J Clin Epidemiol. 2009;62(4):380–6 e381.

Schalken N, Rietbergen C. The reporting quality of systematic reviews and Meta-analyses in industrial and organizational psychology: a systematic review. Front Psychol. 2017;8:1395.

Ranker LR, Petersen JM, Fox MP. Awareness of and potential for dependent error in the observational epidemiologic literature: A review. Ann Epidemiol. 2019;36:15–9 e12.

Paquette M, Alotaibi AM, Nieuwlaat R, Santesso N, Mbuagbaw L. A meta-epidemiological study of subgroup analyses in cochrane systematic reviews of atrial fibrillation. Syst Rev. 2019;8(1):241.

Acknowledgements

This work did not receive any dedicated funding.

Author information

Authors and Affiliations

Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON, Canada

Lawrence Mbuagbaw, Daeria O. Lawson & Lehana Thabane

Biostatistics Unit/FSORC, 50 Charlton Avenue East, St Joseph’s Healthcare—Hamilton, 3rd Floor Martha Wing, Room H321, Hamilton, Ontario, L8N 4A6, Canada

Lawrence Mbuagbaw & Lehana Thabane

Centre for the Development of Best Practices in Health, Yaoundé, Cameroon

Lawrence Mbuagbaw

Center for Evidence-Based Medicine and Health Care, Catholic University of Croatia, Ilica 242, 10000, Zagreb, Croatia

Livia Puljak

Department of Epidemiology and Biostatistics, School of Public Health – Bloomington, Indiana University, Bloomington, IN, 47405, USA

David B. Allison

Departments of Paediatrics and Anaesthesia, McMaster University, Hamilton, ON, Canada

Lehana Thabane

Centre for Evaluation of Medicine, St. Joseph’s Healthcare-Hamilton, Hamilton, ON, Canada

Population Health Research Institute, Hamilton Health Sciences, Hamilton, ON, Canada

Contributions

LM conceived the idea and drafted the outline and paper. DOL and LT commented on the idea and draft outline. LM, LP and DOL performed literature searches and data extraction. All authors (LM, DOL, LT, LP, DBA) reviewed several draft versions of the manuscript and approved the final manuscript.

Corresponding author

Correspondence to Lawrence Mbuagbaw .

Ethics declarations

Ethics approval and consent to participate.

Not applicable.

Consent for publication

Competing interests.

DOL, DBA, LM, LP and LT are involved in the development of a reporting guideline for methodological studies.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Mbuagbaw, L., Lawson, D.O., Puljak, L. et al. A tutorial on methodological studies: the what, when, how and why. BMC Med Res Methodol 20 , 226 (2020). https://doi.org/10.1186/s12874-020-01107-7


Received : 27 May 2020

Accepted : 27 August 2020

Published : 07 September 2020

DOI : https://doi.org/10.1186/s12874-020-01107-7


Keywords

  • Methodological study
  • Meta-epidemiology
  • Research methods
  • Research-on-research

BMC Medical Research Methodology

ISSN: 1471-2288


National Institutes of Health (NIH) - Turning Discovery into Health

Research Methods Resources

Methods at a Glance

This section provides information and examples of methodological issues to be aware of when working with different study designs. Virtually all studies face methodological issues regarding the selection of the primary outcome(s), sample size estimation, missing outcomes, and multiple comparisons. Randomized studies face additional challenges related to the method for randomization. Other studies face specific challenges associated with their study design such as those that arise in effectiveness-implementation research; multiphase optimization strategy (MOST) studies; sequential, multiple assignment, randomized trials (SMART); crossover designs; non-inferiority trials; regression discontinuity designs; and paired availability designs. Some face issues involving exact tests, adherence to behavioral interventions, noncompliance in encouragement designs, evaluation of risk prediction models, or evaluation of surrogate endpoints.

Learn more about broadly applicable methods
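The interplay between sample size estimation and multiple comparisons mentioned above can be made concrete with the standard normal-approximation formula for a two-arm comparison of means, combined with a Bonferroni correction when several co-primary outcomes are tested. This is an illustrative sketch only, not NIH-endorsed code; the function name and default values are our own assumptions:

```python
import math
from statistics import NormalDist

def n_per_arm(effect_size, alpha=0.05, power=0.80, n_comparisons=1):
    """Approximate participants per arm for a two-sided, two-sample
    comparison of means (normal approximation), applying a Bonferroni
    correction when several co-primary outcomes are tested."""
    z = NormalDist().inv_cdf
    alpha_adj = alpha / n_comparisons        # Bonferroni-adjusted alpha
    z_alpha = z(1 - alpha_adj / 2)           # two-sided critical value
    z_beta = z(power)                        # power requirement
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# One primary outcome, standardized effect of 0.5:
print(n_per_arm(0.5))                        # 63 per arm
# Three co-primary outcomes tested at alpha / 3:
print(n_per_arm(0.5, n_comparisons=3))       # 84 per arm
```

The example shows why pre-specifying the number of primary outcomes matters: splitting alpha across three outcomes raises the required sample size by roughly a third.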

Experiments, including clinical trials, differ considerably in the methods used to assign participants to study conditions (or study arms) and to deliver interventions to those participants.

This section provides information related to the design and analysis of experiments in which (1) participants are assigned in groups (or clusters) and individual observations are analyzed to evaluate the effect of the intervention, (2) participants are assigned individually but receive at least some of their intervention with other participants or through an intervention agent shared with other participants, and (3) participants are assigned in groups (or clusters) but groups cross-over to the intervention condition at pre-determined time points in sequential, staggered fashion until all groups receive the intervention.

This material is relevant for both human and animal studies as well as basic and applied research. And while it is important for investigators to become familiar with the issues presented on this website, it is even more important that they collaborate with a methodologist who is familiar with these issues.

In a parallel group-randomized trial, also called a parallel cluster-randomized trial, groups or clusters are randomized to study conditions, and observations are taken on the members of those groups with no crossover of groups or clusters to a different condition or study arm during the trial.  

Learn more about GRTs
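A common way to quantify the analytic cost of randomizing groups rather than individuals is the design effect, DEFF = 1 + (m - 1) * ICC, where m is the average cluster size and ICC is the intraclass correlation. The sketch below is our own illustration with hypothetical inputs; it simply inflates an individually randomized sample size accordingly:

```python
import math

def grt_sample_size(n_individual, cluster_size, icc):
    """Inflate a sample size computed for individual randomization to
    account for within-cluster correlation in a two-arm parallel
    group-randomized trial."""
    deff = round(1 + (cluster_size - 1) * icc, 6)  # design effect
    n_total = math.ceil(n_individual * deff)       # total participants needed
    clusters_per_arm = math.ceil(n_total / (2 * cluster_size))
    return deff, n_total, clusters_per_arm

# Suppose 126 participants would suffice under individual randomization;
# with clusters of 20 members and ICC = 0.05 the requirement grows:
deff, n_total, k = grt_sample_size(126, cluster_size=20, icc=0.05)
print(deff, n_total, k)                            # 1.95 246 7
```

Even a modest ICC nearly doubles the required sample size here, which is why collaborating with a methodologist familiar with these designs is emphasized above.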

In an individually randomized group-treatment trial, also called a partially clustered design, individuals are randomized to study conditions but receive at least some of their intervention with other participants or through an intervention agent shared with other participants.

Learn more about IRGTs

In a stepped wedge group-randomized trial, also called a stepped wedge cluster-randomized trial, groups or clusters are randomized to sequences which cross-over to the intervention condition at predetermined time points in a sequential, staggered fashion until all groups receive the intervention.

Learn more about SWGRTs
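The "sequential, staggered" rollout of a stepped wedge design can be pictured as a schedule matrix whose rows are clusters and whose columns are periods. The sketch below is our own illustration (cluster names are hypothetical); it randomizes clusters to sequences and marks control periods with 0 and intervention periods with 1:

```python
import random

def stepped_wedge_schedule(clusters, n_sequences, seed=0):
    """Randomly assign clusters to sequences; sequence s crosses over to
    the intervention at step s + 1, so by the final period every cluster
    is exposed (0 = control period, 1 = intervention period)."""
    rng = random.Random(seed)
    shuffled = clusters[:]
    rng.shuffle(shuffled)
    schedule = {}
    for i, cluster in enumerate(shuffled):
        seq = i % n_sequences                  # balance clusters over sequences
        # n_sequences + 1 periods: a baseline period plus one per step
        schedule[cluster] = [0] * (seq + 1) + [1] * (n_sequences - seq)
    return schedule

sched = stepped_wedge_schedule([f"clinic_{i}" for i in range(6)], n_sequences=3)
for cluster, periods in sorted(sched.items()):
    print(cluster, periods)
```

Every row starts in the control condition and ends in the intervention condition, which is the defining feature of the design: the comparison draws on both between-cluster and within-cluster (before/after) information.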

NIH Clinical Trial Requirements

The NIH launched a series of initiatives to enhance the accountability and transparency of clinical research. These initiatives target key points along the entire clinical trial lifecycle, from concept to reporting the results.



U.S. Food and Drug Administration

Clinical Trials: What Patients Need to Know

What Are the Different Types of Clinical Research?

Different types of clinical research are used depending on what the researchers are studying. Below are descriptions of some different kinds of clinical research.

Treatment Research generally involves an intervention such as medication, psychotherapy, new devices, or new approaches to surgery or radiation therapy. 

Prevention Research looks for better ways to prevent disorders from developing or returning. Different kinds of prevention research may study medicines, vitamins, vaccines, minerals, or lifestyle changes. 

Diagnostic Research refers to the practice of looking for better ways to identify a particular disorder or condition. 

Screening Research aims to find the best ways to detect certain disorders or health conditions. 

Quality of Life Research explores ways to improve comfort and the quality of life for individuals with a chronic illness. 

Genetic studies aim to improve the prediction of disorders by identifying and understanding how genes and illnesses may be related. Research in this area may explore ways in which a person’s genes make him or her more or less likely to develop a disorder. This may lead to development of tailor-made treatments based on a patient’s genetic make-up. 

Epidemiological studies seek to identify the patterns, causes, and control of disorders in groups of people. 

An important note: some clinical research is “outpatient,” meaning that participants do not stay overnight at the hospital. Some is “inpatient,” meaning that participants will need to stay for at least one night in the hospital or research center. Be sure to ask the researchers what their study requires. 

Phases of clinical trials: when clinical research is used to evaluate medications and devices

Clinical trials are a kind of clinical research designed to evaluate and test new interventions such as psychotherapy or medications. Clinical trials are often conducted in four phases. The trials at each phase have a different purpose and help scientists answer different questions.

Phase I trials

Researchers test an experimental drug or treatment in a small group of people for the first time. The researchers evaluate the treatment’s safety, determine a safe dosage range, and identify side effects.

Phase II trials

The experimental drug or treatment is given to a larger group of people to see if it is effective and to further evaluate its safety.

Phase III trials

The experimental study drug or treatment is given to large groups of people. Researchers confirm its effectiveness, monitor side effects, compare it to commonly used treatments, and collect information that will allow the experimental drug or treatment to be used safely.

Phase IV trials

Post-marketing studies, conducted after a treatment is approved for use by the FDA, provide additional information about the treatment or drug’s risks, benefits, and best use.

Examples of other kinds of clinical research

Many people believe that all clinical research involves testing of new medications or devices. This is not true, however. Some studies do not involve testing medications, and a person’s regular medications may not need to be changed. Healthy volunteers are also needed so that researchers can compare their results to the results of people with the illness being studied. Some examples of other kinds of research include the following:

A long-term study that involves psychological tests or brain scans

A genetic study that involves blood tests but no changes in medication

A study of family history that involves talking to family members to learn about people’s medical needs and history.

Clinical research methods for treatment, diagnosis, prognosis, etiology, screening, and prevention: A narrative review

Affiliations

  • 1 Department of Oncology, McMaster University, Hamilton, Ontario, Canada.
  • 2 Center for Clinical Practice Guideline Conduction and Evaluation, Children's Hospital of Fudan University, Shanghai, P.R. China.
  • 3 Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, Ontario, Canada.
  • 4 Department of Pediatrics, University of Antioquia, Colombia.
  • 5 Editorial Office, Chinese Journal of Evidence-Based Pediatrics, Children's Hospital of Fudan University, Shanghai, P.R. China.
  • 6 Division of Thoracic Surgery, Xuanwu Hospital, Capital Medical University, Beijing, P.R. China.
  • 7 Division of Neuropsychiatry and Behavioral Neurology and Clinical Psychology, Beijing Tiantan Hospital, Capital Medical University, Beijing, P.R. China.
  • 8 Division of Respirology, Tongren Hospital, Capital Medical University, Beijing, P.R. China.
  • 9 Division of Respirology, Xuanwu Hospital, Capital Medical University, Beijing, P.R. China.
  • 10 Division of Orthopedic Surgery, Juravinski Cancer Centre, McMaster University, Hamilton, Ontario, Canada.
  • PMID: 32445266
  • DOI: 10.1111/jebm.12384

This narrative review is an introduction for health professionals on how to conduct and report clinical research in six categories: treatment, diagnosis/differential diagnosis, prognosis, etiology, screening, and prevention. It explains the importance of beginning with an appropriate clinical question and of exploring that question’s suitability through a literature search. Three methodological directives can assist clinicians in conducting their studies: (1) how to conduct an original study or a systematic review, (2) how to report an original study or a systematic review, and (3) how to assess the quality or risk of bias of a previous relevant original study or systematic review. This overview provides readers with key points and resources for performing high-quality research in the six main clinical categories.

Keywords: clinical research methods; diagnosis; literature search; prognosis; treatment.

© 2020 Chinese Cochrane Center, West China Hospital of Sichuan University and John Wiley & Sons Australia, Ltd.

Publication types

  • Biomedical Research / methods*
  • Biomedical Research / standards
  • Mass Screening
  • Preventive Medicine / methods*
  • Systematic Reviews as Topic
  • Therapeutics / methods*

  • Review Article
  • Published: 07 August 2023

Digital and precision clinical trials: innovations for testing mental health medications, devices, and psychosocial treatments

  • Eric Lenze 1 ,
  • John Torous   ORCID: orcid.org/0000-0002-5362-7937 2   na1 &
  • Patricia Arean 3   na1  

Neuropsychopharmacology volume  49 ,  pages 205–214 ( 2024 ) Cite this article

890 Accesses

1 Citation

36 Altmetric

Metrics details

  • Drug development
  • Outcomes research

A Correction to this article was published on 02 October 2023

This article has been updated

Mental health treatment advances - including neuropsychiatric medications and devices, psychotherapies, and cognitive treatments - lag behind other fields of clinical medicine such as cardiovascular care. One reason for this gap is the traditional techniques used in mental health clinical trials, which slow the pace of progress, produce inequities in care, and undermine precision medicine goals. Newer techniques and methodologies, which we term digital and precision trials, offer solutions. These techniques consist of (1) decentralized (i.e., fully-remote) trials which improve the speed and quality of clinical trials and increase equity of access to research, (2) precision measurement which improves success rate and is essential for precision medicine, and (3) digital interventions, which offer increased reach of, and equity of access to, evidence-based treatments. These techniques and their rationales are described in detail, along with challenges and solutions for their utilization. We conclude with a vignette of a depression clinical trial using these techniques.



Change history

02 October 2023

A Correction to this paper has been published: https://doi.org/10.1038/s41386-023-01746-6


Killsback LK. A nation of families: traditional indigenous kinship, the foundation for Cheyenne sovereignty. AlterNative: Int J Indigenous Peoples. 2019;15:34–43. https://doi.org/10.1177/1177180118822833 .

ADA Archive, Department of Justice Civil Rights Division. 2023. https://archive.ada.gov/access-technology/index.html .

How to Check for App Accessibility? Perkins School for the Blind. 2023. https://www.perkins.org/resource/how-check-app-accessibility/ .

Martinez-Alcala CI, Rosales-Lagarde A, Perez-Perez Y, Lopez-Noguerola JS, Bautista-Diaz M, Agis-Juarez RA. The Effects of Covid-19 on the Digital Literacy of the Elderly: Norms for Digital Inclusion. Front Educ. 2021;6:1–19.

Grossman JT, Frumkin MR, Rodebaugh TL, Lenze EJ. mHealth Assessment and Intervention of Depression and Anxiety in Older Adults. Harv Rev Psychiatry. 2020;28:203–14. https://doi.org/10.1097/HRP.0000000000000255 .

Bach AJ, Wolfson T, Crowell JK. Poverty, Literacy, and Social Transformation: An Interdisciplinary Exploration of the Digital Divide. J Media Lit Educ. 2018;10:22–41.

Lee J, Lee EH, Chae D. eHealth Literacy Instruments: Systematic Review of Measurement Properties. J Med Internet Res. 2021;23:e30644. https://doi.org/10.2196/30644 .

Oh SS, Kim KA, Kim M, Oh J, Chu SH, Choi J. Measurement of Digital Literacy Among Older Adults: Systematic Review. J Med Internet Res. 2021;23:e26145. https://doi.org/10.2196/26145 .

Yoon J, Lee M, Ahn JS, Oh D, Shin S-Y, Chang YJ, et al. Development and Validation of Digital Health Technology Literacy Assessment Questionnaire. J Med Syst. 2022;46:13. https://doi.org/10.1007/s10916-022-01800-8 .

Rivadeneira MF, Miranda-Velasco MJ, Arroyo HV, Caicedo-Gallardo JD, Salvador-Pinos C. Digital Health Literacy Related to COVID-19: Validation and Implementation of a Questionnaire in Hispanic University Students. Int J Environ Res Public Health. 2022;19. https://doi.org/10.3390/ijerph19074092 .

U.S. Food & Drug Administration. Digital Health Technologies for Drug Development: Demonstration Projects. 2023. https://www.fda.gov/science-research/science-and-research-special-topics/digital-health-technologies-drug-development-demonstration-projects .

U.S. Food & Drug Administration. The Software Precertification (Pre-Cert) Pilot Program: Tailored Total Product Lifecycle Approaches and Key Findings. 2022. https://www.fda.gov/media/161815/download .

Zarate D, Stavropoulos V, Ball M, de Sena Collier G, Jacobson NC. Exploring the digital footprint of depression: a PRISMA systematic literature review of the empirical evidence. BMC Psychiatry. 2022;22:421. https://doi.org/10.1186/s12888-022-04013-y .

Ortiz A, Maslej MM, Husain MI, Daskalakis ZJ, Mulsant BH. Apps and gaps in bipolar disorder: A systematic review on electronic monitoring for episode prediction. J Affect Disord. 2021;295:1190–200. https://doi.org/10.1016/j.jad.2021.08.140 .

Benoit J, Onyeaka H, Keshavan M, Torous J. Systematic Review of Digital Phenotyping and Machine Learning in Psychosis Spectrum Illnesses. Harv Rev Psychiatry. 2020;28:296–304. https://doi.org/10.1097/HRP.0000000000000268 .

Matcham F, Leightley D, Siddi S, Lamers F, White KM, Annas P, et al. Remote Assessment of Disease and Relapse in Major Depressive Disorder (RADAR-MDD): recruitment, retention, and data availability in a longitudinal remote measurement study. BMC Psychiatry. 2022;22:136. https://doi.org/10.1186/s12888-022-03753-1 .

Currey D, Torous J. Increasing the Value of Digital Phenotyping Through Reducing Missingness: A Retrospective Analysis. medRxiv. 2022. https://doi.org/10.1101/2022.05.17.22275182 .

Torous LS. Usable Data Visualization for Digital Biomarkers: An Analysis of Usability, Data Sharing, and Clinician Contact .

Ghafur S, Van Dael J, Leis M, Darzi A, Sheikh A. Public perceptions on data sharing: key insights from the UK and the USA. Lancet Digit Health. 2020;2:e444–6. https://doi.org/10.1016/S2589-7500(20)30161-8 .

Huberty J. Real Life Experiences as Head of Science. JMIR Ment Health. 2023;10:e43820. https://doi.org/10.2196/43820 .

Kwon S, Firth J, Joshi D, Torous J. Accessibility and availability of smartphone apps for schizophrenia. Schizophrenia (Heidelb). 2022;8:98. https://doi.org/10.1038/s41537-022-00313-0 .

Download references

Author information

These authors contributed equally: John Torous, Patricia Arean.

Authors and Affiliations

Departments of Psychiatry and Anesthesiology, Washington University School of Medicine, St Louis, MO, USA

Eric Lenze

Department of Psychiatry, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA

John Torous

Department of Psychiatry and Behavioral Sciences, University of Washington, Seattle, WA, USA

Patricia Arean


Contributions

EL, JT, and PA all participated in developing the concept for the manuscript, writing the manuscript, and critically reviewing and editing it.

Corresponding author

Correspondence to Eric Lenze .

Ethics declarations

Competing interests

EL: Consultant for Prodeo, Pritikin ICR, IngenioRx, Boehringer-Ingelheim, and Merck. Research funding from Janssen. Patent application pending for sigma-1 receptor agonists for COVID-19. JT: scientific advisory board of Precision Mental Wellness. PA: scientific advisory board of Headspace Health, Koa Health, and Chorus Sleep.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The original online version of this article was revised: in figure 1, the text boxes on the left inadvertently repeated the same information. The figure has now been replaced with an updated version.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article

Cite this article

Lenze, E., Torous, J. & Arean, P. Digital and precision clinical trials: innovations for testing mental health medications, devices, and psychosocial treatments. Neuropsychopharmacol. 49, 205–214 (2024). https://doi.org/10.1038/s41386-023-01664-7


Received : 04 April 2023

Revised : 05 July 2023

Accepted : 10 July 2023

Published : 07 August 2023

Issue Date : January 2024

DOI : https://doi.org/10.1038/s41386-023-01664-7



Research Methods In Psychology

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

Research methods in psychology are systematic procedures used to observe, describe, predict, and explain behavior and mental processes. They include experiments, surveys, case studies, and naturalistic observations, ensuring data collection is objective and reliable to understand and explain psychological phenomena.


Hypotheses are statements that predict the results of a study; they can be supported or refuted by investigation.

There are four types of hypotheses :
  • Null hypotheses (H0) – these predict that no difference will be found in the results between the conditions. Typically these are written ‘There will be no difference…’
  • Alternative hypotheses (Ha or H1) – these predict that there will be a significant difference in the results between the conditions. This is also known as the experimental hypothesis.
  • One-tailed (directional) hypotheses – these state the specific direction the researcher expects the results to take, e.g. higher, lower, more, less. In a correlational study, the predicted direction of the correlation can be either positive or negative.
  • Two-tailed (non-directional) hypotheses – these state that a difference will be found between the conditions of the independent variable but do not state the direction of the difference or relationship. Typically these are written ‘There will be a difference…’

All research has an alternative hypothesis (either a one-tailed or two-tailed) and a corresponding null hypothesis.

Once the research is conducted and results are found, psychologists must decide which hypothesis the results support.

If a significant difference is found, the psychologist rejects the null hypothesis and accepts the alternative hypothesis. If no difference is found, the null hypothesis is retained and the alternative rejected.
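The difference between one-tailed and two-tailed hypotheses can be made concrete with a small calculation. Below is a minimal Python sketch (not part of the original article; the z statistic is a hypothetical example) showing that a directional test yields half the p-value of a non-directional one:

```python
from math import erf, sqrt

def normal_cdf(z):
    """Cumulative distribution function of the standard normal."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def p_value(z, tails=2):
    """P-value of a z statistic for a one- or two-tailed hypothesis."""
    one_tail = 1 - normal_cdf(abs(z))  # probability beyond z in one tail
    return one_tail * tails

z = 1.96  # hypothetical test statistic
p_two = p_value(z, tails=2)  # two-tailed (non-directional): ~0.05
p_one = p_value(z, tails=1)  # one-tailed (directional): ~0.025, half of p_two
```

Because a one-tailed test only looks in the predicted direction, it reaches significance more easily, which is why the direction must be stated before the data are collected.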

Sampling techniques

Sampling is the process of selecting a representative group from the population under study.


A sample is the participants you select from a target population (the group you are interested in) to make generalizations about.

Representative means the extent to which a sample mirrors a researcher’s target population and reflects its characteristics.

Generalisability means the extent to which their findings can be applied to the larger population of which their sample was a part.

  • Volunteer sample : where participants pick themselves through newspaper adverts, noticeboards or online.
  • Opportunity sampling : also known as convenience sampling , uses people who are available at the time the study is carried out and willing to take part. It is based on convenience.
  • Random sampling : when every person in the target population has an equal chance of being selected. An example of random sampling would be picking names out of a hat.
  • Systematic sampling : when a system is used to select participants. Picking every Nth person from all possible participants. N = the number of people in the research population / the number of people needed for the sample.
  • Stratified sampling : when you identify the subgroups and select participants in proportion to their occurrences.
  • Snowball sampling : when researchers find a few participants, and then ask them to find participants themselves and so on.
  • Quota sampling : when researchers will be told to ensure the sample fits certain quotas, for example they might be told to find 90 participants, with 30 of them being unemployed.
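For illustration, several of the sampling techniques above can be sketched in a few lines of Python (the population and subgroup labels here are hypothetical):

```python
import random

population = [f"person_{i}" for i in range(100)]  # hypothetical target population

# Random sampling: every member has an equal chance of selection.
random_sample = random.sample(population, 10)

# Systematic sampling: pick every Nth person,
# where N = population size / sample size.
n = len(population) // 10
systematic_sample = population[::n]

# Stratified sampling: sample each subgroup in proportion to its size.
strata = {"employed": population[:70], "unemployed": population[70:]}
stratified_sample = []
for group in strata.values():
    k = round(len(group) / len(population) * 10)  # proportional share
    stratified_sample.extend(random.sample(group, k))
```

Each approach yields a sample of 10, but only random and stratified sampling give every individual (or every individual within a stratum) an equal chance of selection.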

Experiments always have an independent and dependent variable .

  • The independent variable is the one the experimenter manipulates (the thing that changes between the conditions the participants are placed into). It is assumed to have a direct effect on the dependent variable.
  • The dependent variable is the thing being measured, or the results of the experiment.


Operationalization of variables means making them measurable/quantifiable. We must use operationalization to ensure that variables are in a form that can be easily tested.

For instance, we can’t really measure ‘happiness’, but we can measure how many times a person smiles within a two-hour period. 

By operationalizing variables, we make it easy for someone else to replicate our research. Remember, this is important because we can check if our findings are reliable.

Extraneous variables are any variables other than the independent variable that could affect the results of the experiment.

It can be a natural characteristic of the participant, such as intelligence levels, gender, or age for example, or it could be a situational feature of the environment such as lighting or noise.

Demand characteristics are a type of extraneous variable that occurs when participants work out the aims of the research study and begin to behave in the way they think is expected.

For example, in Milgram’s research , critics argued that participants worked out that the shocks were not real and they administered them as they thought this was what was required of them. 

Extraneous variables must be controlled so that they do not affect (confound) the results.

Randomly allocating participants to their conditions or using a matched pairs experimental design can help to reduce participant variables. 

Situational variables are controlled by using standardized procedures, ensuring every participant in a given condition is treated in the same way.

Experimental Design

Experimental design refers to how participants are allocated to each condition of the independent variable, such as a control or experimental group.
  • Independent design ( between-groups design ): each participant is selected for only one group. With the independent design, the most common way of deciding which participants go into which group is by means of randomization. 
  • Matched participants design : each participant is selected for only one group, but the participants in the two groups are matched for some relevant factor or factors (e.g. ability; sex; age).
  • Repeated measures design ( within groups) : each participant appears in both groups, so that there are exactly the same participants in each group.
  • The main problem with the repeated measures design is that there may well be order effects. Their experiences during the experiment may change the participants in various ways.
  • They may perform better when they appear in the second group because they have gained useful information about the experiment or about the task. On the other hand, they may perform less well on the second occasion because of tiredness or boredom.
  • Counterbalancing is the best way of preventing order effects from disrupting the findings of an experiment, and involves ensuring that each condition is equally likely to be used first and second by the participants.

If we wish to compare two groups with respect to a given independent variable, it is essential to make sure that the two groups do not differ in any other important way. 
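As an illustrative sketch (with hypothetical participant IDs), random allocation for an independent design and counterbalancing for a repeated measures design might look like this in Python:

```python
import random

participants = [f"p{i}" for i in range(1, 21)]  # hypothetical participant IDs

# Independent design: randomly allocate each participant to one condition.
shuffled = participants[:]
random.shuffle(shuffled)
experimental, control = shuffled[:10], shuffled[10:]

# Repeated measures design with counterbalancing:
# alternate which condition each participant completes first,
# so order effects are spread evenly across the two conditions.
orders = [("A", "B") if i % 2 == 0 else ("B", "A")
          for i in range(len(participants))]
```

Random allocation reduces participant variables between groups; counterbalancing ensures each condition is equally likely to be experienced first or second.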

Experimental Methods

All experimental methods involve an IV (independent variable) and a DV (dependent variable).

  • Laboratory experiments are conducted in a controlled environment, where the experimenter manipulates the IV and can control most extraneous variables.
  • Field experiments are conducted in the everyday (natural) environment of the participants. The experimenter still manipulates the IV, but in a real-life setting. It may be possible to control extraneous variables, though such control is more difficult than in a lab experiment.
  • Natural experiments investigate a naturally occurring IV that is not deliberately manipulated; it exists anyway. Participants are not randomly allocated, and the natural event may only occur rarely.

Case studies are in-depth investigations of a person, group, event, or community. They use information from a range of sources, such as from the person concerned and also from their family and friends.

Many techniques may be used such as interviews, psychological tests, observations and experiments. Case studies are generally longitudinal: in other words, they follow the individual or group over an extended period of time. 

Case studies are widely used in psychology and among the best-known ones carried out were by Sigmund Freud . He conducted very detailed investigations into the private lives of his patients in an attempt to both understand and help them overcome their illnesses.

Case studies provide rich qualitative data and have high levels of ecological validity. However, it is difficult to generalize from individual cases as each one has unique characteristics.

Correlational Studies

Correlation means association; it is a measure of the extent to which two variables are related. One of the variables can be regarded as the predictor variable with the other one as the outcome variable.

Correlational studies typically involve obtaining two different measures from a group of participants, and then assessing the degree of association between the measures. 

The predictor variable can be seen as occurring before the outcome variable in some sense. It is called the predictor variable, because it forms the basis for predicting the value of the outcome variable.

Relationships between variables can be displayed on a graph or as a numerical score called a correlation coefficient.

[Figure: scatter plots illustrating positive, negative, and zero correlation]

  • If an increase in one variable tends to be associated with an increase in the other, then this is known as a positive correlation .
  • If an increase in one variable tends to be associated with a decrease in the other, then this is known as a negative correlation .
  • A zero correlation occurs when there is no relationship between variables.

After looking at the scattergraph, if we want to be sure that a significant relationship does exist between the two variables, a statistical test of correlation can be conducted, such as Spearman’s rho.

The test will give us a score, called a correlation coefficient . This is a value between −1 and +1: the closer the score is to −1 or +1, the stronger the relationship between the variables. A positive value (e.g. +0.63) indicates a positive correlation, and a negative value (e.g. −0.63) indicates a negative correlation.
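A correlation coefficient can be computed directly from paired measures. The following Python sketch implements Pearson's formula (the revision-hours and test-score data are invented for illustration):

```python
from math import sqrt

def correlation(xs, ys):
    """Pearson correlation coefficient between two paired measures."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired measures: hours of revision vs test score.
revision = [1, 2, 3, 4, 5]
score = [52, 58, 61, 70, 74]
r = correlation(revision, score)  # close to +1: strong positive correlation
```

Plotting these pairs on a scattergraph would show the points falling close to an upward-sloping line, which is what a coefficient near +1 describes numerically.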

[Figure: scatter plots showing strong, weak, and perfect positive and negative correlations, and no correlation]

A correlation between variables, however, does not automatically mean that the change in one variable is the cause of the change in the values of the other variable. A correlation only shows if there is a relationship between variables.

Correlation does not prove causation, as a third variable may be responsible for the relationship.


Interview Methods

Interviews are commonly divided into two types: structured and unstructured.

Structured interviews : a fixed, predetermined set of questions is put to every participant in the same order and in the same way.

Responses are recorded on a questionnaire, and the researcher presets the order and wording of questions, and sometimes the range of alternative answers.

The interviewer stays within their role and maintains social distance from the interviewee.

Unstructured interviews : there are no set questions; the participant can raise whatever topics he/she feels are relevant, and follow-up questions are posed in response to the participant’s answers.

Unstructured interviews are most useful in qualitative research to analyze attitudes and values.

Though they rarely provide a valid basis for generalization, their main advantage is that they enable the researcher to probe social actors’ subjective point of view. 

Questionnaire Method

Questionnaires can be thought of as a kind of written interview. They can be carried out face to face, by telephone, or post.

The choice of questions is important because of the need to avoid bias or ambiguity in the questions, ‘leading’ the respondent or causing offense.

  • Open questions are designed to encourage a full, meaningful answer using the subject’s own knowledge and feelings. They provide insights into feelings, opinions, and understanding. Example: “How do you feel about that situation?”
  • Closed questions can be answered with a simple “yes” or “no” or specific information, limiting the depth of response. They are useful for gathering specific facts or confirming details. Example: “Do you feel anxious in crowds?”

The questionnaire’s other practical advantages are that it is cheaper than face-to-face interviews and can be used to contact many respondents scattered over a wide area relatively quickly.

Observations

There are different types of observation methods :
  • Covert observation is where the researcher doesn’t tell the participants they are being observed until after the study is complete. This method raises ethical issues around deception and consent.
  • Overt observation is where a researcher tells the participants they are being observed and what they are being observed for.
  • Controlled : behavior is observed under controlled laboratory conditions (e.g., Bandura’s Bobo doll study).
  • Natural : Here, spontaneous behavior is recorded in a natural setting.
  • Participant : Here, the observer has direct contact with the group of people they are observing. The researcher becomes a member of the group they are researching.  
  • Non-participant (aka “fly on the wall”): The researcher does not have direct contact with the people being observed. The observation of participants’ behavior is from a distance.

Pilot Study

A pilot study is a small-scale preliminary study conducted in order to evaluate the feasibility of the key steps in a future, full-scale project.

A pilot study is an initial run-through of the procedures to be used in an investigation; it involves selecting a few people and trying out the study on them. It is possible to save time, and in some cases, money, by identifying any flaws in the procedures designed by the researcher.

A pilot study can help the researcher spot any ambiguities or confusion in the information given to participants, or problems with the task devised.

Sometimes the task is too hard, and the researcher may get a floor effect, because none of the participants can score at all or can complete the task – all performances are low.

The opposite effect is a ceiling effect, when the task is so easy that all achieve virtually full marks or top performances and are “hitting the ceiling”.

Research Design

In cross-sectional research , a researcher compares multiple segments of the population at the same time.

Sometimes, we want to see how people change over time, as in studies of human development and lifespan. Longitudinal research is a research design in which data-gathering is administered repeatedly over an extended period of time.

In cohort studies , the participants must share a common factor or characteristic such as age, demographic, or occupation. A cohort study is a type of longitudinal study in which researchers monitor and observe a chosen population over an extended period.

Triangulation means using more than one research method to improve the study’s validity.

Reliability

Reliability is a measure of consistency, if a particular measurement is repeated and the same result is obtained then it is described as being reliable.

  • Test-retest reliability :  assessing the same person on two different occasions which shows the extent to which the test produces the same answers.
  • Inter-observer reliability : the extent to which there is an agreement between two or more observers.
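Inter-observer reliability is often quantified with Cohen's kappa, which corrects raw percentage agreement for the agreement expected by chance alone. Here is a minimal Python sketch (the two observers' behavior codings are hypothetical):

```python
def cohens_kappa(obs1, obs2):
    """Cohen's kappa: inter-observer agreement corrected for chance."""
    n = len(obs1)
    categories = set(obs1) | set(obs2)
    observed = sum(a == b for a, b in zip(obs1, obs2)) / n
    expected = sum((obs1.count(c) / n) * (obs2.count(c) / n)
                   for c in categories)
    return (observed - expected) / (1 - expected)

# Two hypothetical observers coding the same 10 behaviors:
rater1 = list("AABABBBABA")
rater2 = list("AABABBAABA")
kappa = cohens_kappa(rater1, rater2)  # 1.0 = perfect agreement, 0 = chance
```

With 9 agreements out of 10 codings, raw agreement is 0.9, but kappa is lower (0.8) because some of that agreement would be expected by chance.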

Meta-Analysis

A meta-analysis statistically combines the results of multiple studies that have addressed similar aims/hypotheses, usually identified through a systematic review.

This is done by looking through various databases, and then decisions are made about what studies are to be included/excluded.

Strengths: Increases the conclusions’ validity, as they are based on a wider range of studies and participants.

Weaknesses: Research designs in studies can vary, so they are not truly comparable.
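The core calculation in a meta-analysis is commonly an inverse-variance weighted average: studies with smaller standard errors (usually larger studies) get more weight. A simplified fixed-effect sketch in Python (the three studies' effect sizes and standard errors are hypothetical):

```python
from math import sqrt

def fixed_effect_meta(effects, ses):
    """Inverse-variance weighted pooled effect and its standard error."""
    weights = [1 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = sqrt(1 / sum(weights))
    return pooled, pooled_se

# Hypothetical effect sizes (e.g. mean differences) and standard errors
# from three studies addressing the same hypothesis:
effects = [0.30, 0.45, 0.20]
ses = [0.10, 0.15, 0.20]
pooled, se = fixed_effect_meta(effects, ses)
ci = (pooled - 1.96 * se, pooled + 1.96 * se)  # 95% confidence interval
```

Note that the pooled standard error is smaller than any single study's, which is why combining studies gives more precise conclusions than any study alone.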

Peer Review

A researcher submits an article to a journal. The choice of the journal may be determined by the journal’s audience or prestige.

The journal selects two or more appropriate experts (psychologists working in a similar field) to peer review the article without payment. The peer reviewers assess: the methods and designs used, the originality of the findings, the validity of the original research findings, and the article’s content, structure and language.

Feedback from the reviewer determines whether the article is accepted. The article may be: Accepted as it is, accepted with revisions, sent back to the author to revise and re-submit or rejected without the possibility of submission.

The editor makes the final decision whether to accept or reject the research report based on the reviewers comments/ recommendations.

Peer review is important because it prevents faulty data from entering the public domain, provides a way of checking the validity of findings and the quality of the methodology, and is used to assess the research rating of university departments.

Peer reviews may be an ideal, whereas in practice there are lots of problems. For example, it slows publication down and may prevent unusual, new work being published. Some reviewers might use it as an opportunity to prevent competing researchers from publishing work.

Some people doubt whether peer review can really prevent the publication of fraudulent research.

The advent of the internet means that more research and academic comment is being published without official peer review than before, though systems are evolving online that give everyone a chance to offer their opinions and police the quality of research.

Types of Data

  • Quantitative data is numerical data e.g. reaction time or number of mistakes. It represents how much or how long, how many there are of something. A tally of behavioral categories and closed questions in a questionnaire collect quantitative data.
  • Qualitative data is virtually any type of information that can be observed and recorded that is not numerical in nature and can be in the form of written or verbal communication. Open questions in questionnaires and accounts from observational studies collect qualitative data.
  • Primary data is first-hand data collected for the purpose of the investigation.
  • Secondary data is information that has been collected by someone other than the person who is conducting the research e.g. taken from journals, books or articles.

Validity means how well a piece of research actually measures what it sets out to, or how well it reflects the reality it claims to represent.

Validity is whether the observed effect is genuine and represents what is actually out there in the world.

  • Concurrent validity is the extent to which a psychological measure relates to an existing similar measure and obtains close results. For example, a new intelligence test compared to an established test.
  • Face validity : does the test measure what it’s supposed to measure ‘on the face of it’. This is done by ‘eyeballing’ the measuring or by passing it to an expert to check.
  • Ecological validity is the extent to which findings from a research study can be generalized to other settings / real life.
  • Temporal validity is the extent to which findings from a research study can be generalized to other historical times.

Features of Science

  • Paradigm – A set of shared assumptions and agreed methods within a scientific discipline.
  • Paradigm shift – The result of the scientific revolution: a significant change in the dominant unifying theory within a scientific discipline.
  • Objectivity – When all sources of personal bias are minimised so as not to distort or influence the research process.
  • Empirical method – Scientific approaches that are based on the gathering of evidence through direct observation and experience.
  • Replicability – The extent to which scientific procedures and findings can be repeated by other researchers.
  • Falsifiability – The principle that a theory cannot be considered scientific unless it admits the possibility of being proved untrue.

Statistical Testing

A significant result is one where there is a low probability that chance factors were responsible for any observed difference, correlation, or association in the variables tested.

If our test is significant, we can reject our null hypothesis and accept our alternative hypothesis.

If our test is not significant, we retain (fail to reject) our null hypothesis and reject our alternative hypothesis. A null hypothesis is a statement of no effect.

In Psychology, we use p < 0.05 (as it strikes a balance between making a type I and II error) but p < 0.01 is used in tests that could cause harm like introducing a new drug.

A type I error is when the null hypothesis is rejected when it should have been accepted (happens when a lenient significance level is used, an error of optimism).

A type II error is when the null hypothesis is accepted when it should have been rejected (happens when a stringent significance level is used, an error of pessimism).
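The meaning of the 0.05 significance level can be checked by simulation: if the null hypothesis is really true, about 5% of experiments will still come out "significant", and each of those is a Type I error. A Python sketch (assuming normally distributed scores with a known standard deviation; the data are simulated, not real):

```python
import random
from math import erf, sqrt

random.seed(1)  # reproducible simulation

def two_tailed_p(z):
    """Two-tailed p-value for a z statistic under the standard normal."""
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Simulate many experiments in which the null hypothesis is TRUE:
# both groups come from the same population, so every "significant"
# result is a Type I error (a false positive).
trials, n = 2000, 30
false_positives = 0
for _ in range(trials):
    group_a = [random.gauss(0, 1) for _ in range(n)]
    group_b = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(group_a) / n - sum(group_b) / n
    se = sqrt(2 / n)  # standard error of the difference (known SD = 1)
    if two_tailed_p(diff / se) < 0.05:
        false_positives += 1

rate = false_positives / trials  # close to the 0.05 (5%) alpha level
```

Lowering the threshold to p < 0.01 would cut the false-positive rate to about 1%, at the cost of more Type II errors, which is exactly the trade-off described above.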

Ethical Issues

  • Informed consent is when participants are able to make an informed judgment about whether to take part. However, full disclosure may cause them to guess the aims of the study and change their behavior.
  • To deal with this, we can gain presumptive consent or ask participants to formally indicate their agreement to participate, but this may invalidate the purpose of the study, and it is not guaranteed that the participants would fully understand.
  • Deception should only be used when it is approved by an ethics committee, as it involves deliberately misleading or withholding information. Participants should be fully debriefed after the study but debriefing can’t turn the clock back.
  • All participants should be informed at the beginning that they have the right to withdraw if they ever feel distressed or uncomfortable.
  • Withdrawal can bias the sample, as those who stay may be more obedient, and some participants may not withdraw because they were given incentives or feel they would spoil the study. Researchers can offer the right to withdraw data after participation.
  • Participants should all have protection from harm . The researcher should avoid risks greater than those experienced in everyday life and they should stop the study if any harm is suspected. However, the harm may not be apparent at the time of the study.
  • Confidentiality concerns the communication of personal information. The researchers should not record any names but use numbers or false names though it may not be possible as it is sometimes possible to work out who the researchers were.

Techniques used in clinical research


  • Volume 13, Issue 2
  • Qualitative Research Methods in Mental Health

  • Sarah Peters
  • Correspondence to : Dr Sarah Peters, School of Psychological Sciences, The University of Manchester, Coupland Building 1, Oxford Road M13 9PL, UK; sarah.peters@manchester.ac.uk

As the evidence base for the study of mental health problems develops, there is a need for increasingly rigorous and systematic research methodologies. Complex questions require complex methodological approaches. Recognising this, the MRC guidelines for developing and testing complex interventions place qualitative methods as integral to each stage of intervention development and implementation. However, mental health research has lagged behind many other healthcare specialities in using qualitative methods within its evidence base. Rigour in qualitative research raises many issues similar to those in quantitative research, as well as some additional challenges. This article examines the role of qualitative methods within mental health research, describes key methodological and analytical approaches and offers guidance on how to differentiate between poor and good quality qualitative research.

https://doi.org/10.1136/ebmh.13.2.35


The trajectory of qualitative methods in mental health research

Qualitative methodologies have a clear home within the study of mental health research. Early and, arguably, seminal work into the study of mental illnesses and their management was based on detailed observation, moving towards theory using inductive reasoning. Case studies have long been established in psychiatry to present detailed analysis of unusual cases or novel treatments. Participant observation was the principal method used in Goffman's seminal study of psychiatric patients in asylums, which informed his ideas about the institutionalising and medicalising of mental illness by medical practice. 1 However, the 20th century saw the ‘behaviourist revolution’, a movement in which quantification and experimentation dominated. Researchers sought to identify causes and effects, and reasoning became more deductive – seeking to use data to confirm theory. The study of health and illness was determined by contemporary thinking about disease, taking a biomedical stance. Psychologists and clinical health researchers exploited natural science methodologies, attempting to measure phenomena at their smallest entities and to do so as objectively as possible. This reductionist and positivist philosophy shaped advances in research methods and meant that qualitative exploration failed to develop as a credible scientific approach. Indeed, ‘objectivity’ and the ‘discovery of truth’ have become synonymous with ‘scientific enquiry’, and qualitative methods are easily dismissed as ‘anecdotal’. The underlying epistemology of this approach chimes well with medical practice, for which training is predominately in laboratory and basic sciences (such as physics and chemistry), within which the discourse of natural laws dominates. To this end, research in psychiatry still remains overwhelmingly quantitative. 2

Underlying all research paradigms are assumptions. However, most traditional researchers remain unaware of these until they start to use alternative paradigms. Key assumptions of quantitative research are that facts exist that can be quantified and measured, and that these should be examined, as far as possible, objectively, partialling out or controlling for the context within which they exist. There are research questions within mental health where this approach can hold: where the phenomena of interest can be reliably and meaningfully quantified and measured, it is feasible to use data to test predictions and examine change. However, for many questions these assumptions prove unsatisfying. It is often not possible, or desirable, to try to create laboratory conditions for the research; indeed, it would be ecologically invalid to do so. For example, to understand the experience of an individual who has been newly diagnosed with schizophrenia, it is clearly important to consider the context within which they live: their family, social grouping and the media messages they are exposed to. Table 1 depicts the key differences between the two methodological approaches and the core underlying assumptions of each.


Table 1 Comparison of underlying assumptions of quantitative and qualitative research approaches

It should be cautioned that it is easy to fall into the trap of categorising studies as either quantitative or qualitative. The two traditions are often positioned within the literature as opposing and in conflict. This division is unhelpful and likely to impede methodological advancement. Though, undeniably, there are differences between the two approaches to research, there are also many exceptions that expose this dichotomy as simplistic: some qualitative studies seek to test a priori hypotheses, and some quantitative studies are atheoretical and exploratory. 3 Hence it is more useful to consider research methodologies as lying along a spectrum, with which researchers should be familiar in full, so that a method is chosen according to the research question rather than the researcher's ability.

Rationale for qualitative methods in current mental health research

There are a number of scientific, practical and ethical reasons why mental health is an area that can particularly benefit from qualitative enquiry. Mental health research is complex. Health problems are multifactorial in their aetiology and the consequences they have on the individual, families and societies. Management can involve self-help, pharmacological, educative, social and psychotherapeutic approaches. Services involved are often multidisciplinary and require liaison between a number of individuals including professionals, service-users and relatives. Many problems are exacerbated by poor treatment compliance and lack of access to, or engagement with, appropriate services. 4

Engagement with mental health research can also be challenging. Topics may be highly sensitive or private. Individuals may have impaired capacity or be at high risk. During the research process there may be revelations of suicidal ideation or criminal activity. Hence mental health research can raise additional ethical issues. In other cases scepticism of services makes for reluctant research participants. However, if we accept the case that meaningful research can be based in subjective enquiry then qualitative methods provide a way of giving voice to participants. Qualitative methods offer an effective way of involving service-users in developing interventions for mental health problems 5 ensuring that the questions asked are meaningful to individuals. This may be particularly beneficial if participants are stakeholders, for example potential users of a new service.

Qualitative methods are also valuable for individuals with limited literacy skills, who may struggle with pencil-and-paper measures. For example, qualitative research has proved fruitful in understanding children's concepts of mental illness and associated services. 6

How qualitative enquiry is used within mental health research

There are a range of types of research question where qualitative methods prove useful – from the development and testing of theory, to the piloting and establishing efficacy of treatment approaches, to understanding issues around translation and implementation into routine practice. Each is discussed in turn.

Development and testing of theory

Qualitative methods are important in exploratory work and in generating understanding of a phenomenon, stimulating new ideas or building new theory. For example, stigma is a concept that is recognised as a barrier to accessing services and also an added burden to mental health. A focus-group study sought to understand the meaning of stigma from the perspectives of individuals with schizophrenia, their relatives and health professionals. 7 From this they developed a four-dimensional theory which has subsequently informed interventions to reduce stigma and discrimination that target not only engagement with psychiatric services but also interactions with the public and work. 7

Development of tools and measures

Qualitative methods access personal accounts, capturing how individuals talk about a lived experience. This can be invaluable for designing new research tools. For example, Mavaddat and colleagues used focus groups with 56 patients with severe or common mental health problems to explore their experiences of primary care management. 8 Nine focus groups were conducted and analysis identified key themes. From these, items were generated to form a Patient Experience Questionnaire, of which the psychometric properties were subsequently examined quantitatively in a larger sample. Not only can dimensions be identified, the rich qualitative data provide terminology that is meaningful to service users that can then be incorporated into question items.
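The quantitative follow-up described above can be sketched in code. The item scores below are invented for illustration (they are not from Mavaddat and colleagues' study); the function computes Cronbach's alpha, a standard internal-consistency check for a questionnaire built from qualitative themes.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns
    (each inner list holds one item's scores across all respondents)."""
    k = len(items)            # number of questionnaire items
    n = len(items[0])         # number of respondents

    def variance(xs):         # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_var = sum(variance(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    total_var = variance(totals)
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical 1-5 Likert responses from six respondents to three items
items = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 4],
    [5, 5, 2, 4, 3, 5],
]
print(round(cronbach_alpha(items), 2))  # → 0.89
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency, though the threshold depends on the scale's purpose.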

Development and testing of interventions

As we have seen, qualitative methods can inform the development of new interventions. The gold-standard methodology for investigating treatment effectiveness is the randomised controlled trial (RCT), with the principal output being an effect size or a demonstration that the primary outcome was significantly improved for participants in the intervention arm compared with those in the control/comparison arm. Nevertheless, as will be familiar to researchers and clinicians involved in trials, immense research and clinical learning arises from these substantial, often lengthy and expensive research endeavours. Qualitative methods provide a means to empirically capture these lessons, whether they are about recruitment, therapy training/supervision, treatment delivery or content. These data are essential for improving the feasibility and acceptability of further trials and for developing the intervention. Conducting qualitative work prior to embarking on an RCT can inform the design, delivery and recruitment, as well as engage relevant stakeholders early in the process; all of these can prevent costly errors. Qualitative research can also be used during a trial to identify reasons for poor recruitment: in one RCT, implementing findings from this type of investigation increased the randomisation rate from 40% to 70%. 9

Nesting qualitative research within a trial can be viewed as taking out an insurance policy, as data are generated which can later help explain negative or surprising findings. A recent trial of reattribution training for GPs to manage medically unexplained symptoms demonstrated substantial improvements in GP consultation behaviour. 10 However, effects on clinical outcomes were counterintuitive. A series of nested qualitative studies helped shed light on why this was the case: patients' illness models were complex, and they resisted engaging with GPs (whom they perceived as having a more simplistic and dualistic understanding) because they were anxious it would lead to non-identification or misdiagnosis of any potential future disease, 11 an issue that can be addressed in future interventions. Even if the insights are unsurprising to those involved in the research, the data collected have been generated systematically and can be subjected to peer review and disseminated. For this reason, there is an increasing expectation from funding bodies that qualitative methodologies are integral to psychosocial intervention research.

Translation and implementation into clinical practice

Trials provide limited information about how treatments can be implemented into clinical practice or applied to another context. Psychological interventions are more effective when delivered within trial settings by experts involved in their development than when they are delivered within clinical settings. 12 Qualitative methods can help us understand how to implement research findings into routine practice. 13

Understanding what stakeholders value about a service and what barriers exist to its uptake is another evidence base to inform clinicians' practice. Relapse prevention is an effective psychoeducation approach that helps individuals with bipolar disorder extend the time to relapse. Qualitative methodologies identified which aspects of the intervention service-users and care-coordinators value and, hence, are likely to utilise in routine care. 14 The intervention facilitated better understanding of bipolar disorder (by both parties), demonstrating, in turn, a rationale for medication. Patients discovered new, empowering and less socially isolated ways of managing their symptoms, which had important impacts on interactions with healthcare staff and family members. Furthermore, care-coordinators reported how they used elements of the intervention when working with clients with other diagnoses. The research also provided insights into where difficulties may occur when implementing a particular intervention into routine care. For example, for care-coordinators this proved a novel way of working with clients that was more emotionally demanding, thus highlighting the need for supervision and managerial support. 14

Beginners guide to qualitative approaches: one size doesn't fit all

Just as there is a range of quantitative research designs and statistical analyses to choose from, so there are many types of qualitative methods. Choosing a method can be daunting for an inexperienced qualitative researcher, as it requires engaging with new terms and new ways of thinking about knowledge. The following summary sets out analytic and data-generation approaches that are commonly used in mental health research. It is not intended to be comprehensive and is provided only as a point of access for researchers less familiar with the literature.

Data generation

Qualitative data are generated in several ways. Most commonly, researchers seek a sample and conduct a series of individual in-depth interviews, seeking participants' views on topics of interest. Typically these last upwards of 45 min and are organised on the basis of a schedule of topics identified from the literature or pilot work. This does not act as a questionnaire, however; rather, it acts as a flexible framework for exploring areas of interest. The researcher combines open questions to elicit free responses, with focused questions for probing and prompting participants to provide effective responses. Usually interviews are audio-recorded and transcribed verbatim for subsequent analysis.

As interviews are held in private, on a one-to-one basis, they provide scope to develop a trusting relationship in which participants are comfortable disclosing socially undesirable views. For example, in a study of practice nurses' views of chronic fatigue syndrome, some nurses described patients as lazy or illegitimate – a view that challenges the stereotype of the nursing professional as a sympathetic and caring person. 15 This gives important information about the education and supervision required to ensure that general nurses are capable of delivering psychological interventions for these types of problems.

Alternatively, groups of participants are brought together for a focus group, which usually lasts around 2 hours. Although it is tempting to consider focus groups an efficient way of acquiring data from several participants simultaneously, there are disadvantages. They are difficult to organise for geographically dispersed or busy participants, and there are compromises to confidentiality, particularly within ‘captive’ populations (eg, within an organisation, individuals may be unwilling to criticise). Group dynamics must be considered; the presence of a dominant or self-professed expert can inhibit the group and, therefore, prevent useful data generation. When the subject matter is sensitive, individuals may be unwilling to discuss experiences in a group, although a group often promotes a shared experience that can be empowering. Most of these problems can be avoided by careful planning of the group composition and by ensuring the group is conducted by a highly skilled facilitator. Lester and colleagues 16 used focus-group sessions with patients and health professionals to understand the experience of dealing with serious mental illness. Initially, separate patient-only and health-professional-only groups were held; subsequently, combined focus groups containing both patients and health professionals were used. 16 The primary advantage of focus groups is that they enable the generation of data about how individuals discuss and interact around a phenomenon; thus, a well-conducted focus group can be an extremely rich source of data.

A different type of data comprises naturally occurring dialogue and behaviours. These may be recorded through observation and detailed field notes (see ethnography in Table 2 ) or analysed from audio/video recordings. Other data sources include texts, for example diaries, clinical notes and Internet blogs. Qualitative data can even be generated through postal surveys. We thematically analysed responses to an open-ended question set within a survey about medical educators' views of behavioural and social sciences (BSS). 17 From this, key barriers to integrating BSS within medical training were identified, which included an entrenched biomedical mindset. The themes were analysed in relation to the existing literature and revealed that, despite radical changes in medical training, the power of the hidden curriculum persists. 17

Table 2 Key features of a range of analytical approaches used within mental health research

Analysing qualitative data

Researchers bring a wide range of analytical approaches to the data. A comprehensive and detailed discussion of the philosophy underlying different methods is beyond the scope of this paper; however, a summary of the key analytical approaches used in mental health research is provided in Table 2 . An illustrative example is provided for each approach to offer some insight into the commonalities and differences between methodologies. The procedure for analysis in all methods involves successive stages of data familiarisation/immersion, followed by seeking and reviewing patterns within the data, which may then be defined and categorised as specific themes. Researchers move back and forth between data generation and analysis, confirming or disconfirming emerging ideas. The relationship of the analysis to theory-testing or theory-building depends on the methodology used.

Some approaches are more common in healthcare than others. Interpretative phenomenological analysis (IPA) and thematic analysis have proved particularly popular. In contrast, ethnographic research requires a high level of researcher investment and reflexivity and can prove challenging for NHS ethics committees. Consequently, it remains underused in healthcare research.

Recruitment and sampling

Quantitative research is interested in identifying the typical, or average. By contrast, qualitative research aims to discover and examine the breadth of views held within a community. This includes extreme or deviant views and views that are absent. Consequently, qualitative researchers do not necessarily (though in some circumstances they may) seek to identify a representative sample. Instead, the aim may be to sample across the range of views. Hence, qualitative research can comment on what views exist and what this means, but it is not possible to infer the proportions of people from the wider population that hold a particular view.

However, sampling for a qualitative study is not any less systematic or considered. In a quantitative study one would take a statistical approach to sampling, for example, selecting a random sample or recruiting consecutive referrals, or every 10th out-patient attendee. Qualitative studies, instead, often elect to use theoretical means to identify a sample. This is often purposive; that is, the researcher uses theoretical principles to choose the attributes of included participants. Healey and colleagues conducted a study to understand the reasons for individuals with bipolar disorder misusing substances. 18 They sought to include participants who were current users of each substance group, and the recruitment strategy evolved to actively target specific cases.

Qualitative studies typically use far smaller samples than quantitative studies. The number varies depending on the richness of the data yielded and the type of analytic approach, and can range from a single case to more than 100 participants. As with all research, it is unethical to recruit more participants than needed to address the question at hand; a qualitative sample should be sufficient for thematic saturation to be achieved from the data.
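One pragmatic way to judge thematic saturation is to track how many previously unseen codes each successive interview contributes, and to stop recruiting once new interviews add little. A minimal sketch, with invented codes (not drawn from any study cited here):

```python
def new_codes_per_interview(coded_interviews):
    """Given each interview's set of codes (in interview order),
    return how many previously unseen codes each interview added."""
    seen, counts = set(), []
    for codes in coded_interviews:
        fresh = set(codes) - seen
        counts.append(len(fresh))
        seen |= fresh
    return counts

# Hypothetical codes assigned to six successive interviews
interviews = [
    {"stigma", "isolation", "family"},
    {"stigma", "medication", "work"},
    {"family", "work", "services"},
    {"stigma", "services"},
    {"isolation", "medication"},
    {"services", "family"},
]
print(new_codes_per_interview(interviews))  # → [3, 2, 1, 0, 0, 0]
```

A count tapering towards zero, as in the last three interviews here, is the usual signal that saturation has been reached; in practice the judgement also weighs the depth, not just the number, of new codes.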

Ensuring that findings are valid and generalisable

A common question from individuals new to qualitative research is how can findings from a study of few participants be generalised to the wider population? In some circumstances, findings from an individual study (quantitative or qualitative) may have limited generalisability; therefore, more studies may need to be conducted, in order to build local knowledge that can then be tested or explored across similar groups. 4 However, all qualitative studies should create new insights that have theoretical or clinical relevance which enables the study to extend understanding beyond the individual participants and to the wider population. In some cases, this can lead to generation of new theory (see grounded theory in Table 2 ).

Reliability and validity are two important ways of ascertaining rigour in quantitative research. Qualitative research seeks to understand individual construction and, by definition, is subjective. It is unlikely, therefore, that a study could ever be repeated under exactly the same circumstances. Instead, qualitative research is concerned with whether the findings are trustworthy; that is, if the same circumstances were to prevail, would the same conclusions be drawn?

There are a number of ways to maximise trustworthiness. One is triangulation, of which there are three subtypes. Data triangulation involves using data from several sources (eg, interviews, documentation, observation). A research team may include members from different backgrounds (eg, psychology, psychiatry, sociology), enabling a range of perspectives to be used within the discussion and interpretation of the data. This is termed researcher triangulation . The final subtype, theoretical triangulation, requires using more than one theory to examine the research question. Another technique to establish the trustworthiness of the findings is to use respondent validation. Here, the final or interim analysis is presented to members of the population of interest to ascertain whether interpretations made are valid.

An important aspect of all qualitative studies is researcher reflexivity. Here researchers consider their role and how their experience and knowledge might influence the generation, analysis and interpretation of the data. As with all well-conducted research, a clear record of progress should be kept – to enable scrutiny of recruitment, data generation and development of analysis. However, transparency is particularly important in qualitative research as the concepts and views evolve and are refined during the process.

Judging quality in qualitative research

Within all fields of research there are better and worse ways of conducting a study, and the quality of qualitative mental health research is variable. Many of the principles for judging quality in qualitative research are the same as for judging quality in any other type of research. However, several guidelines have been developed to help readers, reviewers and editors who lack methodological expertise to feel more confident in appraising qualitative studies. Such guidelines are a prerequisite for the relatively recent advance of methodologies for the systematic reviewing of qualitative literature (see meta-synthesis in Table 2 ). Box 1 provides some key questions that should be considered when studying a qualitative report.

Box 1 Guidelines for authors and reviewers of qualitative research (adapted from Malterud 35 )

▶ Is the research question relevant and clearly stated?

Reflexivity

▶ Are the researcher's motives and background presented?

Method, sampling and data collection

▶ Is a qualitative method appropriate and justified?

▶ Is the sampling strategy clearly described and justified?

▶ Is the method for data generation fully described?

▶ Are the characteristics of the sample sufficiently described?

Theoretical framework

▶ Was a theoretical framework used and stated?

▶ Are the principles and procedures for data organisation and analysis described and justified?

▶ Are strategies used to test the trustworthiness of the findings?

▶ Are the findings relevant to the aim of the study?

▶ Are data (e.g. quotes) used to support and enrich the findings?

▶ Are the conclusions directly linked to the study? Are you convinced?

▶ Do the findings have clinical or theoretical value?

▶ Are findings compared to appropriate theoretical and empirical literature?

▶ Are questions about the internal and external validity and reflexivity discussed?

▶ Are shortcomings of the design, and the implications these have on findings, examined?

▶ Are clinical/theoretical implications of the findings made?

Presentation

▶ Is the report understandable and clearly contextualised?

▶ Is it possible to distinguish between the voices of informants and researchers?

▶ Are sources from the field used and appropriately referenced?

Conclusions and future directions

Qualitative research has enormous potential within the field of mental health research, yet researchers are only beginning to exploit the range of methods they use at each stage of enquiry. Strengths of qualitative research primarily lie in developing theory and increasing understanding about effective implementation of treatments and how best to support clinicians and service users in managing mental health problems. An important development in the field is how to integrate methodological approaches to address questions. This raises a number of challenges, such as how to integrate textual and numerical data and how to reconcile different epistemologies. A distinction can be made between mixed- method design (eg, quantitative and qualitative data are gathered and findings combined within a single or series of studies) and mixed- model study, a pragmatist approach, whereby aspects of qualitative and quantitative research are combined at different stages during a research process. 19 Qualitative research is still often viewed as only a support function or as secondary to quantitative research; however, this situation is likely to evolve as more researchers gain a broader skill set.

Though it is undeniable that there has been a marked increase in the volume and quality of qualitative research published within the past two decades, mental health research has been surprisingly slow to develop compared with other disciplines, e.g. general practice and nursing, with relatively few qualitative research findings reaching mainstream psychiatric journals. 2 This does not appear to reflect overall editorial policy; rather, it may be partly due to a lack of confidence among editors and reviewers in identifying rigorous qualitative research for publication. 20 However, the skilled researcher should no longer find him or herself forced into a position of defending a single-methodology camp (quantitative vs qualitative), but should be equipped with the necessary methodological and analytical skills to study and interpret data, and to appraise and interpret others' findings, across the full range of methodological techniques.



  • Open access
  • Published: 22 March 2024

Development of an interdisciplinary training program about chronic pain management with a cognitive behavioural approach for healthcare professionals: part of a hybrid effectiveness-implementation study

  • Wouter Munneke 1 , 2 , 3 ,
  • Christophe Demoulin 3 ,
  • Jo Nijs 1 , 2 , 4 , 5 ,
  • Carine Morin 6 ,
  • Emy Kool 7 ,
  • Anne Berquin 8 ,
  • Mira Meeus 2 , 9 &
  • Margot De Kooning 1 , 2  

BMC Medical Education volume  24 , Article number:  331 ( 2024 ) Cite this article


Many applied postgraduate pain training programs are monodisciplinary, whereas interdisciplinary training programs potentially improve interdisciplinary collaboration, which is favourable for managing patients with chronic pain. However, limited research exists on the development and impact of interdisciplinary training programs, particularly in the context of chronic pain.

This study aimed to describe the development and implementation of an interdisciplinary training program regarding the management of patients with chronic pain, which is part of a type 1 hybrid effectiveness-implementation study. The targeted groups included medical doctors, nurses, psychologists, physiotherapists, occupational therapists, dentists and pharmacists. An interdisciplinary expert panel was organised to provide its perception of the importance of formulated competencies for integrating biopsychosocial pain management with a cognitive behavioural approach into clinical practice. They were also asked to provide their perception of the extent to which healthcare professionals already possess the competencies in their clinical practice. Additionally, the expert panel was asked to formulate the barriers and needs relating to training content and the implementation of biopsychosocial chronic pain management with a cognitive behavioural approach in clinical practice, which was complemented with a literature search. This was used to develop and adapt the training program to the barriers and needs of stakeholders.

The interdisciplinary expert panel considered the competencies very important. Additionally, they perceived that healthcare professionals (HCPs) possess these competencies to a relatively low degree in their clinical practice. A wide variety of barriers and needs for stakeholders were formulated and organised within the Theoretical Domains Framework linked to the COM-B domains: ‘capability’, ‘opportunity’ and ‘motivation’. The developed interdisciplinary training program, including two workshops of seven hours each and two e-learning modules, aimed to improve HCPs’ competencies for integrating biopsychosocial chronic pain management with a cognitive behavioural approach into clinical practice.
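Organising barriers under the COM-B domains is, at heart, a grouping exercise. The sketch below shows how such a mapping could be represented; the barrier wording is invented for illustration and is not taken from the study's expert panel.

```python
from collections import defaultdict

# Hypothetical barriers tagged with the COM-B domain they fall under,
# mirroring how expert-panel input could be organised.
barriers = [
    ("Limited knowledge of pain neuroscience", "capability"),
    ("Low confidence in cognitive behavioural techniques", "capability"),
    ("No reimbursement for longer consultations", "opportunity"),
    ("Little interdisciplinary contact in daily practice", "opportunity"),
    ("Belief that patients expect a biomedical explanation", "motivation"),
]

def group_by_domain(tagged):
    """Group (barrier, domain) pairs into a domain -> barriers mapping."""
    grouped = defaultdict(list)
    for barrier, domain in tagged:
        grouped[domain].append(barrier)
    return dict(grouped)

for domain, items in sorted(group_by_domain(barriers).items()):
    print(f"{domain}: {len(items)} barrier(s)")
```

Grouping barriers this way makes it straightforward to check that a training program addresses each COM-B domain rather than, say, only the knowledge-related 'capability' items.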

We designed an interdisciplinary training program based on the barriers and needs formulated regarding the management of patients with chronic pain, which can serve as a foundation for developing and enhancing the quality of future training programs.

Peer Review reports

Introduction

Chronic pain affects approximately 20% of the population worldwide [ 1 ]. Chronic pain has a tremendous personal and socioeconomic impact: it causes the highest number of years lived with disability [ 2 ] and is the largest cause of work-related disability [ 3 , 4 ]. The intensity, functional impact and persistence of pain are influenced by biopsychosocial factors [ 5 , 6 , 7 , 8 , 9 ]. Factors such as comorbidities, physical well-being, behaviour, psychosocial well-being and environmental aspects can all influence the pain a person experiences [ 5 , 6 , 7 , 8 , 9 ]. This understanding of chronic pain has shifted management strategies from purely biomedical treatments to multimodal approaches acknowledging the complex biopsychosocial nature of chronic pain.

Nonetheless, integrating biopsychosocial chronic pain management is complex. As a consequence, many applied treatments remain biomedically oriented and are defined as low-value care [ 10 ], resulting in poorer pain, activity and work-related outcomes [ 11 , 12 , 13 ]. In addition, patients often consider their treatment inadequate [ 1 , 14 , 15 , 16 ]. Despite decades of education, dozens of guidelines and many good intentions to improve care, the gap between science and clinical care remains, which limits the implementation of biopsychosocial chronic pain management in clinical practice. The reasons why healthcare professionals (HCPs) adhere poorly to clinical guidelines are multifactorial, e.g. a lack of knowledge regarding pain and pain management [ 17 , 18 , 19 , 20 , 21 , 22 , 23 ], and HCPs' feeling that their skills and confidence are insufficient to change their behaviour, or that the guidelines are not applicable in their clinical practice [ 24 , 25 , 26 , 27 ]. Furthermore, patient ability and preferences also affect HCPs' guideline adherence [ 21 , 28 , 29 ].

Postgraduate training programs could lower these barriers by improving HCPs' knowledge, skills and confidence to facilitate behavioural change. Studies indicate that educational interventions resulted in more guideline-adherent recommendations regarding activity, bed rest and imaging referral [ 30 ] and in changed actual referral behaviour [ 31 ] compared with solely providing clinical guidelines, although French et al. (2013) found significant differences in guideline-adherent imaging recommendations but not in actual imaging behaviour [ 30 ]. In addition to improved guideline adherence, training programs are effective in improving HCPs' knowledge and skills regarding the management of pain, with effect sizes ranging from small to large [ 32 , 33 , 34 , 35 , 36 , 37 ], although this effect can decline over time [ 38 ]. Most educational training programs have been applied to monodisciplinary groups of HCPs, while there is a need for interdisciplinary training to facilitate interdisciplinary collaboration within healthcare [ 20 , 39 , 40 ]. Interdisciplinary collaboration in clinical practice is also associated with improved psychosocial attitudes and might therefore benefit the mid- and long-term effectiveness of training programs [ 39 , 41 , 42 ]. However, little is known about the impact of interdisciplinary postgraduate pain training programs, especially those focusing on chronic pain; this represents a significant knowledge gap. Developing such programs is also challenging, as they have to be applicable to all HCPs. Here, we aimed to address this knowledge gap by developing an interdisciplinary training program about chronic pain for HCPs.

For the reasons outlined above, within this study, we describe the development of an interdisciplinary training program about chronic pain for HCPs. First, an interdisciplinary expert panel was organised to identify the barriers and needs expressed by stakeholders for such an interdisciplinary chronic pain training program. Second, these barriers and needs were used to develop an interdisciplinary training program regarding the management of patients with chronic pain. This study is part of a type 1 hybrid effectiveness-implementation study that evaluates the impact of an interdisciplinary training program about chronic pain on HCPs' knowledge and attitudes, and assesses the determinants of implementation behaviour.

The study was approved by an independent Medical Ethical Committee (EC-2021-327) linked to the University Hospital of Brussels, Brussels, Belgium, and is reported in accordance with the Guideline for Reporting Evidence-based practice Educational interventions and Teaching (GREET) [ 43 ], the Template for Intervention Description and Replication (TIDieR) checklist [ 44 ] and the Standards for Reporting Implementation Studies (StaRI) Statement [ 45 ].

Belgian context

Belgium is a European country with 11.7 million inhabitants and is divided into three regions: Flanders (official language Dutch), Brussels (official languages Dutch and French) and Wallonia (official language French). Belgium has a federal government (Federal Public Service) that manages substantial parts of public health; each region has its own governance with powers in fields connected with that region. In 2019, 7.9% (€37.2 billion) of the Belgian Gross Domestic Product was spent on health [ 46 ]. In 2022, Belgium had approximately 61,858 medical doctors, 41,535 physiotherapists, 210,079 nurses, 13,255 dentists, 22,508 pharmacists, 14,478 occupational therapists and 14,641 clinical psychologists [ 47 ]. However, these are registered HCPs and do not represent all practising HCPs. Most care is coordinated by primary care doctors, and access to a physiotherapist or occupational therapist requires a referral. Patients pay part of the cost of care, as it is only partly reimbursed by health insurance, which is mandatory for all inhabitants. Approximately 23% of the Belgian population has chronic pain [ 1 ]. In primary care doctor practices, patients with chronic pain account for 33 to 49% of consultations, with 81% reporting pain lasting for more than a year [ 48 ]. Moreover, pain is the primary motive for consultation in 78% of (sub)acute patients and 54% of chronic pain patients [ 48 ].

The study consortium consists of three partners: an international research group, Pain in Motion, administratively embedded at the Vrije Universiteit Brussel (VUB) in collaboration with Université de Liège, Ghent University, Antwerp University and Université Catholique de Louvain; and two primary care doctor associations, SSMG and Domus Medica, which represent French- and Dutch-speaking primary care doctors in Belgium. The Belgian Federal Public Service of Health, Food Chain Safety and Environment funded this study. Together with affiliated healthcare policy organisations, the Federal Public Service was represented in a guidance committee. This committee supervised the progress of the study and provided feedback based on reports and presentations by the study consortium.

Pain management competencies

Pain management competencies were used to guide the development of the training program, to assess the extent to which healthcare providers meet this standard, and as learning outcomes for the training program. The competencies were based on the book Explain Pain [ 49 ], which aims to demystify the process of understanding and managing pain; this basis was requested within the funding application of the Belgian Federal Public Service of Health, Food Chain Safety and Environment. Subsequently, the consortium worked collaboratively to refine and formulate these competencies until consensus was achieved among the members who applied for the grant (JN, CD, MDK, MM & AB). The pain management competencies were:

Understand acute and chronic pain within a biopsychosocial framework

Understand the differences between pain and nociception, and between acute and chronic pain.

Recognize that the purely biomedical model is out-of-date and that the biopsychosocial model of pain should be adopted.

Assess patients with (chronic) pain comprehensively

Use questionnaires and interviews to identify patients’ biopsychosocial factors which might influence pain experience according to the PSCEBSM model [ 9 ] (pain–somatic factors – cognitive factors – emotional factors – behavioural factors – social factors – motivation).

Assess the patients’ resources, obstacles to improvement, and their “readiness to change”.

Integrate contemporary pain science into clinical reasoning in patients with chronic pain

Incorporate patients' biopsychosocial factors when making decisions regarding chronic pain type (e.g. nociceptive, neuropathic and/or nociplastic pain), patients’ evaluation and care request.

Design multimodal treatment programs, either mono- or interdisciplinary, according to the patients’ representations, beliefs, expectations and needs, e.g. stress self-management program, graded activity program, graded exposure, education/reassurance, etc.

Provide tailored and patient-centred strategies to subacute and chronic pain patients

Educational strategies:

Understand that pain science education (PSE) is a continuous process;

Use communication skills to favour therapeutic alliance;

Master pain neurophysiology and the biology behind different pain mechanisms to be able to explain pain to patients by means of metaphors and tools.

Use a patient-centred approach to define specific goals that are meaningful to the patient.

Manage obstacles to improve the patient’s motivation to change.

Teach patients pain coping skills aligned with the ideas delivered during PSE.

Understand the role of HCPs in an interdisciplinary perspective

Understand other healthcare disciplines' roles in successfully managing chronic pain.

Communicate adequately with other HCPs about the management of chronic pain.

Interdisciplinary expert panel

Knowing the priority groups' setting and their barriers and needs to change is essential for successful implementation [ 50 , 51 , 52 , 53 , 54 ]. We selected priority groups of HCPs working in primary care, since these are the first HCPs in contact with patients with chronic pain: primary care doctors, (home) nurses, psychologists, physiotherapists, occupational therapists, dentists and pharmacists. Although we focused on these priority groups, the training program was accessible to all HCPs.

An interdisciplinary expert panel was organised that included 21 experts: a Dutch-speaking and a French-speaking expert for each priority group, two pain centre specialists, two heads of pain centres, a member of a patient association and a member of a Belgian organisation that focuses on guideline implementation.

The interdisciplinary expert panel completed an online questionnaire in which they rated the importance of the established competencies and indicated the extent to which Belgian HCPs already possess the competencies in their clinical practice. Furthermore, the expert panel was asked to formulate barriers and needs relating to training content and to the implementation of biopsychosocial chronic pain management with a cognitive behavioural approach in Belgian clinical practice, in line with contemporary pain science. They were asked to provide the barriers and needs at the level of HCPs, patients, organisations and the healthcare system. All answers regarding barriers and needs provided through the online questionnaire were included. The answers were complemented by a literature search and discussed during the first meeting to provide a deeper understanding of the barriers, needs and specific context variables relevant to the implementation study. We used a framework to guide and organise the barriers and needs, and to characterise interventions and policies to change behaviour [ 55 ]. This framework consists of the Theoretical Domains Framework, containing 14 domains regarding behavioural change, which were mapped onto the COM-B model. The COM-B model is a guide for designing interventions and includes the domains 'capability', 'opportunity' and 'motivation' [ 56 ]. Three online meetings with the expert panel were organised: one to discuss the barriers and needs, one to evaluate the patient materials and one to evaluate the training program prior to implementation. The expert panel received an update about the results of the training program after completion of the implementation process.

Chronic pain training program

An original and interactive blended-learning training program, including two e-learning modules and two face-to-face workshops, was developed based on the barriers and needs formulated by the literature search and expert panel. The training program aimed to improve HCPs' competencies for integrating biopsychosocial chronic pain management with a cognitive behavioural approach into clinical practice. Both a Dutch and a French version were developed. Each e-learning module lasted approximately one hour, and each workshop seven hours. This number of training hours is commonly applied and reported to be effective in changing knowledge, attitudes and determinants of implementation behaviour [ 57 , 58 ].

The e-learning modules provided the theoretical basis to the participants and maximised the time for interaction and skills training during the workshops. The two workshops – in interdisciplinary groups – focused on skills training and the practical implementation of the biopsychosocial model, improved communication techniques and PSE for a cognitive behavioural approach in clinical practice, because these are applicable and essential to all HCPs [ 59 , 60 , 61 , 62 , 63 , 64 , 65 , 66 ]. Approximately a month was planned between the two workshops so that participants could practise in their clinical setting and discuss their experiences during the second workshop. We used a variety of educational methods, such as interactive lessons, video materials, local opinion leaders [ 67 ], demonstrations, illustrations, assignments, skills training, clinical reasoning training, goal setting, role playing, case studies, interdisciplinary discussions, and peer and teacher feedback, to improve the learning process [ 67 , 68 , 69 , 70 ]. Interdisciplinary collaborative exercises were applied to facilitate uniformity in communication and in the approach to chronic pain management, and to improve collaboration in clinical practice. These methods were used to reduce the barriers and accommodate the needs formulated by the expert panel for implementing the biopsychosocial model, corresponding to HCPs' current best-evidence approach in line with modern pain science [ 41 , 69 ]. Both workshops included mandatory phases combined with optional phases that could be adapted to the expectations and needs of the participants.

After participating in the training program, participants were asked whether they were interested in sharing their name, work address(es) and contact details. With this information, an interactive map was developed and shared with all participants to improve their interdisciplinary collaboration. The local trainers aimed to facilitate sustainable change by acting as chronic pain resource persons for the HCPs in their geographic areas after the implementation study.

Patient materials

Patient materials were developed to support the integration of the biopsychosocial model and PSE in clinical practice and to improve the quality of PSE for patients with chronic pain. The patient materials included posters, a patient booklet (an update of an existing PSE booklet [ 71 ]) and videos explaining pain, which were created in collaboration with the Retrain Pain Foundation based on their PSE slides [ 72 ]. A panel of five Dutch-speaking and five French-speaking patients with chronic pain was organised to co-design these materials. These patients were recruited from two chronic pain patient organisations and within the university hospital of Brussels (UZ Brussel). The patient panel discussed patients' needs and the information and messages that were important to patients, and provided feedback on the developed materials. The patient materials discuss the impact of pain, why we feel pain, the difference between acute and chronic pain, the role of the nervous system and the brain, an overprotective alarm system and contributing factors, and how to manage chronic pain (e.g. improving understanding about pain, beliefs and expectations, an active lifestyle, stress management, social life, sleep, positive and negative effects of medication, self-management and support from HCPs). The patient materials were evaluated by the expert panel and patient panel based on the following criteria: 'clarity', 'content', 'usefulness', 'layout', 'understandability', 'added value or not', 'consistency' and 'suggestions for improvements'. All materials were updated based on their feedback to improve quality.

Trainer recruitment and train-the-trainer workshop

Each training was provided by a pair of trainers: an expert trainer and a local trainer. The expert trainers were affiliated with the consortium, had graduated as HCPs, had teaching experience, and were familiar with chronic pain, the biopsychosocial model and PSE. The local trainers were HCPs working in the geographic area of training implementation and helped to tailor the training program to the local context, i.e. taking into account the sociocultural diversity of the patient population in the geographic area and the local, formal and informal networks of HCPs. The criteria for local trainers were: fluency in Dutch or French, working three days a week with patients with chronic pain in the geographic areas of implementation, expertise in chronic pain, a biopsychosocial perspective, ability to participate in the train-the-trainer workshop, and ability to provide at least two workshops.

The train-the-trainer workshops were implemented to secure the quality of the trainers and to ensure that the trainers' knowledge and attitudes were in line with the training content. They included online one-on-one training sessions and discussions about chronic pain organised by the expert trainer with whom the local trainer formed a training duo. This personal train-the-trainer workshop provided the opportunity to adapt it to the needs of the expert and local trainer. In addition, group meetings with other local trainers were organised for more general discussions, to ensure that the core of the training program was the same for all training duos. At the end of the train-the-trainer workshop, all trainers completed the Knowledge And Attitudes of Pain questionnaire to assess their level of knowledge and attitudes toward pain in line with modern pain science [ 73 , 74 ]. Trainers received a fee of €350 for participating in the train-the-trainer workshop and a fee of €600 for each day of workshops provided for HCPs.

Recruitment of healthcare professionals

We aimed to train a minimum of 500 HCPs in a total of 25 groups of approximately 20 to 25 HCPs: five training groups in each implementation area – Antwerp and Ghent (Flanders), Brussels (Brussels), and Namur and Liège (Wallonia). We prioritised recruitment of HCPs working in the cities where we implemented the training to facilitate interdisciplinary collaboration during and after the training program. If spots remained available in a training group a month prior to the training date, recruitment was expanded to a wider geographical area; therefore, all HCPs in Belgium were eligible to register for the training program. HCPs were recruited through multiple methods and networks. The consortium collaborated with organisations associated with HCPs in primary care, the Federal Public Service, and organisations connected to the study to recruit HCPs. All organisations shared information and flyers on their websites, in magazines, on social media and/or within their networks.

Participants received continuing education credits for participating in the training program to stimulate participation. The cost of the training program was covered by the funding; the training was therefore free for participants, making it accessible also for HCPs with fewer financial resources. In addition, the training program was implemented on various days of the week (Monday to Saturday) and at various times of day (morning and afternoon, or afternoon and evening) to enable most HCPs to participate within their work schedule.

Data collection and evaluation

HCPs were recruited from August 2021 to May 2022 and from October 2022 to June 2023. Workshops were organised from October 2021 to June 2022 and from March 2023 to July 2023. The results of implementing the training program will be analysed and reported in separate papers, covering the short- and mid-term changes in HCPs' knowledge, attitudes and guideline adherence regarding chronic pain and HCPs' confidence regarding low back pain. In addition, we will assess HCPs' barriers to and needs for integrating the cognitive behavioural approach. Furthermore, HCPs' training satisfaction will be evaluated after each workshop and after six months. All HCPs who enrolled in the training program were invited to take part in these studies. Each participant was requested to complete an informed consent form.

Interdisciplinary expert panels’ perception towards competencies

Within the interdisciplinary expert panel, 17 of the 21 members completed the questionnaire in which they rated the importance of the competencies and the extent to which Belgian HCPs already possess the competencies in their clinical practice. The expert panel considered nine competencies 'very important' to 'extremely important', see Fig. 1. One main competence – 'integrate contemporary pain science into clinical reasoning in patients with chronic pain' – and one sub-competence – 'use questionnaires and interviews to identify patients' biopsychosocial factors which might influence pain experience according to the PSCEBSM model' – were rated between 'moderately important' and 'very important'. Originally, the questionnaire asked about the importance of integrating contemporary pain neuroscience into clinical reasoning. During the meeting, the expert panel indicated that integrating pain neuroscience into clinical reasoning was seen as important provided that pain science does not solely focus on neurophysiology; the competence was therefore changed to 'pain science'. The use of questionnaires was seen as less important compared with the other competencies. The panel's perception of the extent to which Belgian HCPs already possess the competencies in their clinical practice ranged from 'neutral' to 'agree'. This showed that there was considerable room for improvement on all competencies and that the training program needed to take this low level of competence into account. This was done by discussing the importance of the competencies and by making the program accessible and understandable for HCPs with less experience and lower possession of the competencies in their clinical practice.

figure 1

Expert panel's perceptions of the importance of the competencies and of HCPs' possession of them in clinical practice. Importance of competencies: 1 = not important at all, 2 = slightly important, 3 = moderately important, 4 = very important, 5 = extremely important. HCPs' possession of competencies: 1 = totally disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = totally agree. Higher scores reflect higher importance and stronger possession of HCPs' competencies in clinical practice. PSCEBSM = pain – somatic factors – cognitive factors – emotional factors – behavioural factors – social factors – motivation

Barriers and needs

All 21 members of the interdisciplinary expert panel completed the questionnaire or participated in the meeting on stakeholders' barriers and needs concerning training content and the implementation of chronic pain management with a cognitive behavioural approach in Belgian clinical practice, in line with contemporary pain science. The questionnaire, the meeting with the interdisciplinary expert panel and the literature search identified a large variety of barriers and needs, which are presented in the Theoretical Domains Framework for behavioural change linked to the COM-B domains, see Table 1.

In summary, the barriers and needs reflected the importance of the competencies. Based on the domain of psychological capabilities, the training program needed to improve HCPs' knowledge and especially their skills related to a biopsychosocial approach and interdisciplinary collaboration for the management of patients with chronic pain. It was advised to develop a general chronic pain course that was not too complex; the need to improve skills was stronger than the need to improve knowledge.

The social and physical opportunity domains showed that many environmental factors, such as the biomedical perspectives of healthcare and society and the lack of biopsychosocial education regarding pain, could limit participants' acceptance of the biopsychosocial model. They also showed implications for implementation in clinical practice, such as a lack of time, resources and support for HCPs and patients. Furthermore, based on the motivation domain, many HCPs lack interest in the management of patients with chronic pain and in interdisciplinary collaboration. In addition, HCPs have limited confidence in assessing psychosocial factors, believe that patients have little interest in a biopsychosocial approach and pain education, do not encourage patient goals focused on self-management and quality of life, and have negative emotions relating to pain management.

Training program

E-learning modules

The first e-learning module (approximately one hour) aimed at achieving competencies 1, 2 and 3 (1. Understand acute and chronic pain within a biopsychosocial framework; 2. Assess patients with (chronic) pain comprehensively; 3. Integrate contemporary pain science into clinical reasoning in patients with chronic pain). It included an introduction explaining the rationale and learning outcomes of the teaching programme, as well as the necessary basic theoretical parts, e.g. the impact of chronic pain on patients and society, definitions of pain, the physiology of acute and chronic pain, the biopsychosocial model, biopsychosocial factors related to chronification and persistence of pain (e.g. stress, anxiety, catastrophising, depression, misbeliefs, insomnia, inactivity, etc.), and types of pain (nociceptive, neuropathic and nociplastic pain).

The second e-learning module aimed at achieving competencies 3, 4 and 5 (3. Integrate contemporary pain science into clinical reasoning in patients with chronic pain; 4. Provide tailored and patient-centred strategies to subacute and chronic pain patients; 5. Understand the role of HCPs in an interdisciplinary perspective).

This module started with a summary of the first e-learning module, after which it introduced the patient-centred approach; patients' attitudes, beliefs, motivation and coping; PSE strategies; metaphors; the importance of the words used with patients; goal setting; obstacles to change; motivational interviewing; self-management and lifestyle; patients' needs and expectations; commonly applied modalities/treatments (e.g. imaging, medication, hands-on techniques and exercise); and the mono- and interdisciplinary approach in the management of chronic pain.

The e-learning modules used interactive educational methods to activate participants' prior knowledge and experience and to integrate them efficiently with new content. The content was delivered through video animations, expert interviews and short texts, complemented by reflection questions during and after the slides and by a test at the end of each session (quizzes, multiple-choice tests and open questions, with automated feedback to participants).

Face-to-face workshops

The key aspects of the training program were a biopsychosocial pain assessment, specific patient-centred communication techniques and biopsychosocial treatment programs integrating PSE. The interdisciplinary training program can be found in Online Resource 1.

The first workshop aimed to provide the knowledge and skills needed to successfully integrate biopsychosocial (pain) assessment of patients, to give a first introduction to PSE in practice, and to integrate the biopsychosocial model and contemporary pain science into clinical reasoning in patients with chronic pain (competencies 1–4). The workshop included lecturing, exercises, interdisciplinary group discussions, and skills training relating to pain assessment, communication, PSE and the participants' barriers and needs in implementing these in their clinical practice. After the first workshop, participants received exercises to implement and practise biopsychosocial pain assessment, specific patient-centred communication techniques and PSE in their clinical practice. Participants received a poster providing key messages for patients regarding chronic pain, a patient booklet to support PSE in their clinical setting and the link to the patient videos. All French and Dutch patient materials can be found on the website of Pain in Motion: http://www.paininmotion.be/patients/information-about-persistent-pain .

The second workshop aimed to provide the ability to tailor and apply patient-centred strategies to patients with subacute and chronic pain and to understand the role of HCPs from an interdisciplinary perspective. The workshop included lecturing, exercises, interdisciplinary group discussions, and skills training relating to providing PSE, motivational interviewing, the patient-centred approach, the mono-/interdisciplinary approach and communication between HCPs.

Both workshops contained nine mandatory phases with objectives per phase and two optional phases to adapt the training to the needs of the participants in the group. We evaluated whether these phases were applied and their objectives achieved through discussions with participants and through questions and observations by the trainers. Participants' satisfaction with the workshops was evaluated with a satisfaction questionnaire after each workshop.

Adaptations during the implementation process

The workshops were slightly adapted during the implementation process, although their core elements remained the same. After the first three workshop groups, a group discussion about the factors influencing pain at the start of the first workshop was removed because participants felt it added little beyond the e-learning modules. Furthermore, participants wanted more time for PSE exercises, so a motivational interviewing exercise was moved to the second workshop. In the second workshop, a motivational interviewing exercise was simplified due to difficulties experienced by participants. Finally, minor adjustments were made to slides during the implementation process to support the trainers' lecturing.

For the first four workshop groups, we aimed to recruit approximately 20 HCPs per group. However, many participants cancelled at the last minute due to COVID-19-related circumstances. Therefore, in agreement with the trainers, group sizes were increased to approximately 25 for the remaining 11 workshop groups to train a minimum of 300 HCPs while assuring the quality of the training program.

The developed interdisciplinary training program for the management of patients with chronic pain consisted of two 7-hour workshops and two e-learning modules, aimed at improving HCPs' competencies for integrating biopsychosocial chronic pain management with a cognitive behavioural approach into clinical practice. The interdisciplinary expert panel and the literature search identified a wide variety of barriers and needs relating to the training content and to the implementation of chronic pain management with a cognitive behavioural approach in clinical practice. This provided valuable insight into the challenges for the implementation study and for HCPs, and was used to adapt the training program to the Belgian context. This study is part of a type 1 hybrid implementation study assessing the impact of such chronic pain training programs on the knowledge, attitudes and behaviour of HCPs regarding chronic pain management, aiming for higher-value care for patients with chronic pain [ 82 ].

Recently, Slater et al. (2022) designed a framework in Australia that serves as a blueprint for shaping interdisciplinary training about chronic pain with patients, HCPs and pain educators [ 83 ]. This framework identified gaps and training targets based on priorities in pain care. Although that study was performed in the Australian context, the identified gaps and training targets are closely aligned with the competencies and content of our training program. It is therefore likely that our competencies and the related barriers and needs are generalizable to many healthcare contexts worldwide. However, it remains unknown what dose, intensity and frequency of training are needed to address these barriers and needs and to obtain the competencies. Our training program lasted two days, a commonly applied duration that has been effective in previous studies for obtaining the competencies by improving the knowledge, attitudes and behaviour of HCPs [ 37 , 38 , 58 , 84 ]. Other studies used training programs ranging from a single workshop of multiple hours [ 32 , 84 ] or multiple workshops of a few hours [ 36 ] to multiple days [ 85 , 86 ]. These studies - with both fewer and more workshop hours - found significantly improved knowledge and skills in pain knowledge or in educating patients about pain, indicating that obtaining the competencies is feasible. However, these training programs were monodisciplinary and no detailed program was published, making comparison difficult. Kongsted et al. (2019) published a brief training program that aimed to support physiotherapists' and chiropractors' integration of biopsychosocial low back pain management with a cognitive behavioural approach in clinical practice [ 85 ]. This training program also included two-day workshops, targeted similar competencies, used a similar mix of theoretical and skills training, and was shown to be feasible and effective in changing clinical behaviour [ 57 , 87 ].
In addition to the training programs reported above, our training program included two e-learning modules to support the workshops, which potentially improved the learning experience and satisfaction of participants [ 88 ]. To our knowledge, no other detailed interdisciplinary training program plans on the topic of pain are available.

A strength of this study was the co-design with a large interdisciplinary expert panel, which formulated the barriers and needs of stakeholders, and the use of a framework to organise factors relating to behavioural change [ 56 ]. Addressing these barriers and needs, together with a blended learning design and interactive teaching methods, improved the quality of the training for HCPs in Belgium [ 51 , 52 ]. Furthermore, the two-day training program, open to all HCPs and targeted at seven disciplines, is feasible to implement and scale up for a large population of HCPs and many healthcare systems. The training program was also updated during the implementation process based on the experiences of the trainers and participants. Another strength is the availability of patient materials, developed with a patient panel, to support HCPs in integrating PSE into clinical practice. Lastly, the training program was implemented in five different areas of Belgium, in two languages, and is available in Dutch, French and English.

However, this study also has several limitations. A more intensive co-design throughout the process with experts and patients may have improved the quality of the training program. Furthermore, the formulated barriers and needs were based on a literature search and the expert panel; no systematic literature review was conducted, so some barriers and needs may have been missed. In addition, the estimate of HCPs' pre-intervention possession of the competencies in clinical practice was based on the expert panel's perception rather than on a large-scale survey. Moreover, the training program includes several learning outcomes related to competencies that are challenging to assess or that are not covered by the initial evaluation plan. Consequently, determining the achievement of some learning outcomes within this implementation study may remain inconclusive.

This study can potentially serve as a foundation for future training, saving the time and resources required to develop training programs de novo. However, training programs need to be further developed and cross-culturally adapted within the geographic areas of implementation. To improve this process, more training programs should be made available to facilitate learning from one another, e.g. to provide insight into how many hours of practical training are desirable or which elements of the training facilitate learning most effectively. By reducing the differences between postgraduate training programs, we might also reduce the differences in knowledge and attitudes between HCPs and potentially improve their interdisciplinary collaboration [ 89 ]. Many factors play an important role in the learning experience of HCPs and their behaviour change, and many of these factors remain poorly understood. Hence, the publication of training programs by projects and studies should be encouraged, and the effectiveness of such training programs and their implementation in clinical practice should be assessed. Furthermore, studies are needed to compare the effect of interdisciplinary versus monodisciplinary training programs. Although interdisciplinary training groups can facilitate interdisciplinary collaboration, they may introduce variation in the learning effect, as training that focuses on knowledge or skills may not be equally relevant across disciplines [ 90 ].

To address the significant gap in studies examining the effectiveness of interdisciplinary postgraduate chronic pain training programs, as well as the established need for interdisciplinary training to improve interdisciplinary collaboration within healthcare, we developed an interdisciplinary training program to improve HCPs' competencies for integrating biopsychosocial chronic pain management with a cognitive behavioural approach into clinical practice. To do so, an interdisciplinary expert panel identified the barriers and needs of stakeholders for such a training program, which informed the development of the interdisciplinary pain management training program. The training program can also serve as a foundation for developing and enhancing the quality of future training programs.

Availability of data and materials

The complete and more detailed training program and materials are available in French and Dutch from the corresponding author on reasonable request.

Abbreviations

HCP: Healthcare professional

PSE: Pain science education

PSCEBSM: Pain – somatic factors – cognitive factors – emotional factors – behavioural factors – social factors – motivation

Breivik H, Collett B, Ventafridda V, Cohen R, Gallacher D. Survey of chronic pain in Europe: prevalence, impact on daily life, and treatment. Eur J Pain. 2006;10(4):287–333. https://doi.org/10.1016/j.ejpain.2005.06.009 .


Global Burden of Disease Study 2013 Collaborators. Global, regional, and national incidence, prevalence, and years lived with disability for 301 acute and chronic diseases and injuries in 188 countries, 1990–2013: a systematic analysis for the Global Burden of Disease Study 2013. Lancet (London, England). 2015;386(9995):743–800. https://doi.org/10.1016/s0140-6736(15)60692-4 .

Andersson GB. Epidemiological features of chronic low-back pain. Lancet (London, England). 1999;354(9178):581–5. https://doi.org/10.1016/S0140-6736(99)01312-4 .

Waddell G, Burton AK. Occupational health guidelines for the management of low back pain at work: evidence review. Occup Med (Lond). 2001;51(2):124–35. https://doi.org/10.1093/occmed/51.2.124 .

Fillingim RB. Individual differences in pain responses. Curr Rheumatol Rep. 2005;7(5):342–7. https://doi.org/10.1007/s11926-005-0018-7 .

Lumley MA, Cohen JL, Borszcz GS, Cano A, Radcliffe AM, Porter LS, et al. Pain and emotion: a biopsychosocial review of recent research. J Clin Psychol. 2011;67(9):942–68. https://doi.org/10.1002/jclp.20816 .

Meeus M, Nijs J. Central sensitization: a biopsychosocial explanation for chronic widespread pain in patients with fibromyalgia and chronic fatigue syndrome. Clin Rheumatol. 2007;26(4):465–73. https://doi.org/10.1007/s10067-006-0433-9 .

McLean SA, Clauw DJ, Abelson JL, Liberzon I. The development of persistent pain and psychological morbidity after motor vehicle collision: integrating the potential role of stress response systems into a biopsychosocial model. Psychosom Med. 2005;67(5):783–90. https://doi.org/10.1097/01.psy.0000181276.49204.bb .

Wijma AJ, van Wilgen CP, Meeus M, Nijs J. Clinical biopsychosocial physiotherapy assessment of patients with chronic pain: the first step in pain neuroscience education. Physiother Theory Pract. 2016;32(5):368–84. https://doi.org/10.1080/09593985.2016.1194651 .

Hartvigsen J, Kamper SJ, French SD. Low-value care in musculoskeletal health care: is there a way forward? Pain Pract. 2022;22(S2):65–70. https://doi.org/10.1111/papr.13142 .

Darlow B. Beliefs about back pain: the confluence of client, clinician and community. Int J Osteopath Med. 2016;20:53–61. https://doi.org/10.1016/j.ijosm.2016.01.005 .

Chibnall JT, Tait RC, Andresen EM, Hadler NM. Race differences in diagnosis and surgery for occupational low back injuries. Spine (Phila Pa 1976). 2006;31(11):1272–5. https://doi.org/10.1097/01.brs.0000217584.79528.9b .

Christe G, Nzamba J, Desarzens L, Leuba A, Darlow B, Pichonnaz C. Physiotherapists’ attitudes and beliefs about low back pain influence their clinical decisions and advice. Musculoskelet Sci Pract. 2021;53:102382. https://doi.org/10.1016/j.msksp.2021.102382 .

Voerman J, Chomrikh L, Huygen F, Nederland A, Nvacp B, Nederland DO, et al. Patiënttevredenheid Bij Chronische pijn. Soest: SWP; 2015.


Smalbrugge M, Jongenelis LK, Pot AM, Beekman ATF, Eefsting JA. Pain among nursing home patients in the Netherlands: prevalence, course, clinical correlates, recognition and analgesic treatment – an observational cohort study. BMC Geriatr. 2007;7(1):3. https://doi.org/10.1186/1471-2318-7-3 .

van Herk R, Boerlage AA, van Dijk M, Baar FPM, Tibboel D, de Wit R. Pain management in Dutch nursing homes leaves much to be desired. Pain Manage Nurs. 2009;10(1):32–9. https://doi.org/10.1016/j.pmn.2008.06.003 .

Linton SJ, Vlaeyen J, Ostelo R. The back pain beliefs of health care providers: are we fear-avoidant? J Occup Rehabil. 2002;12(4):223–32. https://doi.org/10.1023/A:1020218422974 .

Gheldof EL, Vinck J, Vlaeyen JW, Hidding A, Crombez G. The differential role of pain, work characteristics and pain-related fear in explaining back pain and sick leave in occupational settings. Pain. 2005;113(1–2):71–81. https://doi.org/10.1016/j.pain.2004.09.040 .

International Association for the Study of Pain (IASP). Declaration of Montreal. Seattle: IASP; 2010. Available from: https://www.iasp-pain.org/advocacy/iasp-statements/access-to-pain-management-declaration-of-montreal/

Recommendations by the International Association for the Study of Pain. International Association for the Study of Pain (IASP). Available from: https://www.iasp-pain.org/advocacy/iasp-statements/desirable-characteristics-of-national-pain-strategies/ .  Accessed 20 May 2023.

Gardner T, Refshauge K, Smith L, McAuley J, Hübscher M, Goodall S. Physiotherapists’ beliefs and attitudes influence clinical practice in chronic low back pain: a systematic review of quantitative and qualitative studies. J Physiotherapy. 2017;63(3):132–43. https://doi.org/10.1016/j.jphys.2017.05.017 .

Darlow B, Fullen BM, Dean S, Hurley DA, Baxter GD, Dowell A. The association between health care professional attitudes and beliefs and the attitudes and beliefs, clinical management, and outcomes of patients with low back pain: a systematic review. Eur J Pain. 2012;16(1):3–17. https://doi.org/10.1016/j.ejpain.2011.06.006 .

Holden MA, Nicholls EE, Young J, Hay EM, Foster NE. UK-Based physical therapists’ attitudes and beliefs regarding Exercise and knee osteoarthritis: findings from a mixed-methods study. Arthritis Rheumatism-Arthritis Care Res. 2009;61(11):1511–21. https://doi.org/10.1002/art.24829 .

Zangoni G, Thomson OP. I need to do another course’ - Italian physiotherapists’ knowledge and beliefs when assessing psychosocial factors in patients presenting with chronic low back pain. Musculoskelet Sci Pract. 2017;27:71–7. https://doi.org/10.1016/j.msksp.2016.12.015 .

Synnott A, O’Keeffe M, Bunzli S, Dankaerts W, O’Sullivan P, O’Sullivan K. Physiotherapists may stigmatise or feel unprepared to treat people with low back pain and psychosocial factors that influence recovery: a systematic review. J Physiotherapy. 2015;61(2):68–76. https://doi.org/10.1016/j.jphys.2015.02.016 .

Richmond H, Hall AM, Hansen Z, Williamson E, Davies D, Lamb SE. Exploring physiotherapists’ experiences of implementing a cognitive behavioural approach for managing low back pain and identifying barriers to long-term implementation. Physiotherapy. 2018;104(1):107–15. https://doi.org/10.1016/j.physio.2017.03.007 .

Driver C, Kean B, Oprescu F, Lovell GP. Knowledge, behaviors, attitudes and beliefs of physiotherapists towards the use of psychological interventions in physiotherapy practice: a systematic review. Disabil Rehabil. 2017;39(22):2237–49. https://doi.org/10.1080/09638288.2016.1223176 .

Roussel NA, Neels H, Kuppens K, Leysen M, Kerckhofs E, Nijs J, et al. History taking by physiotherapists with low back pain patients: are illness perceptions addressed properly? Disabil Rehabil. 2016;38(13):1268–79. https://doi.org/10.3109/09638288.2015.1077530 .

Lugtenberg M, Burgers JS, Besters CF, Han D, Westert GP. Perceived barriers to guideline adherence: a survey among general practitioners. BMC Fam Pract. 2011;12(1):98. https://doi.org/10.1186/1471-2296-12-98 .

French SD, McKenzie JE, O’Connor DA, Grimshaw JM, Mortimer D, Francis JJ, et al. Evaluation of a theory-informed implementation intervention for the management of acute low back pain in general medical practice: the IMPLEMENT cluster randomised trial. PLoS One. 2013;8(6):e65471. https://doi.org/10.1371/journal.pone.0065471 .

Schectman JM, Schroth WS, Verme D, Voss JD. Randomized controlled trial of education and feedback for implementation of guidelines for acute low back pain. J Gen Intern Med. 2003;18(10):773–80. https://doi.org/10.1046/j.1525-1497.2003.10205.x .

Stevenson K, Lewis M, Hay E. Does physiotherapy management of low back pain change as a result of an evidence-based educational programme? J Eval Clin Pract. 2006;12(3):365–75. https://doi.org/10.1111/j.1365-2753.2006.00565.x .

Zhang C-H, Hsu L, Zou B-R, Li J-F, Wang H-Y, Huang J. Effects of a pain education program on nurses’ pain knowledge, attitudes and pain assessment practices in China. J Pain Symptom Manag. 2008;36(6):616–27. https://doi.org/10.1016/j.jpainsymman.2007.12.020 .

Jacobs CM, Guildford BJ, Travers W, Davies M, McCracken LM. Brief psychologically informed physiotherapy training is associated with changes in physiotherapists’ attitudes and beliefs towards working with people with chronic pain. Br J Pain. 2016;10(1):38–45. https://doi.org/10.1177/2049463715600460 .

Synnott A, O’Keeffe M, Bunzli S, Dankaerts W, O’Sullivan P, Robinson K, et al. Physiotherapists report improved understanding of and attitude toward the cognitive, psychological and social dimensions of chronic low back pain after cognitive functional therapy training: a qualitative study. J Physiotherapy. 2016;62(4):215–21. https://doi.org/10.1016/j.jphys.2016.08.002 .

Ghandehari OO, Hadjistavropoulos T, Williams J, Thorpe L, Alfano DP, Bello-Haas VD, et al. A controlled investigation of Continuing Pain Education for Long-Term Care Staff. Pain Res Manage. 2013;18(1):395481. https://doi.org/10.1155/2013/395481 .

Gaupp R, Walter M, Bader K, Benoy C, Lang UE. A two-Day Acceptance and Commitment Therapy (ACT) workshop increases presence and work functioning in healthcare workers. Front Psychiatry. 2020;11: 861. https://doi.org/10.3389/fpsyt.2020.00861 .

Achaliwie F, Wakefield AB, Mackintosh-Franklin C. Does Education improve nurses’ knowledge, attitudes, skills, and practice in relation to pain management? An integrative review. Pain Manage Nurs. 2023;24(3):273–9. https://doi.org/10.1016/j.pmn.2022.12.002 .

Petit A, Begue C, Richard I, Roquelaure Y. Factors influencing physiotherapists’ attitudes and beliefs toward chronic low back pain: impact of a care network belonging. Physiother Theory Pract. 2019;35(5):437–43. https://doi.org/10.1080/09593985.2018.1444119 .

Hammick M, Freeth D, Koppel I, Reeves S, Barr H. A best evidence systematic review of interprofessional education: BEME Guide 9. Med Teach. 2007;29(8):735–51. https://doi.org/10.1080/01421590701682576 .

Thompson K, Johnson MI, Milligan J, Briggs M. Twenty-five years of pain education research—what have we learned? Findings from a comprehensive scoping review of research into pre-registration pain education for health professionals. Pain. 2018;159(11):2146–58. https://doi.org/10.1097/j.pain.0000000000001352 .

Misra S, Harvey RH, Stokols D, Pine KH, Fuqua J, Shokair SM, et al. Evaluating an interdisciplinary undergraduate training program in health promotion research. Am J Prev Med. 2009;36(4):358–65. https://doi.org/10.1016/j.amepre.2008.11.014 .

Phillips AC, Lewis LK, McEvoy MP, Galipeau J, Glasziou P, Moher D, et al. Development and validation of the guideline for reporting evidence-based practice educational interventions and teaching (GREET). BMC Med Educ. 2016;16(1):237. https://doi.org/10.1186/s12909-016-0759-1 .

Hoffmann TC, Glasziou PP, Boutron I, Milne R, Perera R, Moher D, et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ. 2014;348: g1687. https://doi.org/10.1136/bmj.g1687 .

Pinnock H, Barwick M, Carpenter CR, Eldridge S, Grandes G, Griffiths CJ, et al. Standards for reporting implementation studies (StaRI) Statement. BMJ. 2017;356: i6795. https://doi.org/10.1136/bmj.i6795 .

Devos C, Cordon A, Lefèvre M, Obyn C, Renard F, Bouckaert N, Gerkens S, Maertens de Noordhout C, Devleesschauwer B, Haelterman M, Léonard C, Meeus P. Performance of the Belgian health system – report 2019. Health Services Research (HSR). Brussels: Belgian Health Care Knowledge Centre (KCE); 2019.

Federale Overheidsdienst Volksgezondheid, Veiligheid van de Voedselketen en Leefmilieu. Jaarstatistieken met betrekking tot de beoefenaars van gezondheidszorgberoepen in België, 2022. Brussels, May 2023. Available from: https://overlegorganen.gezondheid.belgie.be/nl/documenten/hwf-jaarstatistieken-2022 . Accessed 20 June 2023.

Steyaert A, Bischoff R, Feron J-M, Berquin A. The high burden of acute and chronic pain in general practice in French-Speaking Belgium. J pain Res. 2023;16:1441–51. https://doi.org/10.2147/JPR.S399037 .

Butler DS, Moseley GL. Explain Pain. 2nd ed. Adelaide: Noigroup publications; 2013.

Grol R. Beliefs and evidence in changing clinical practice. BMJ. 1997;315(7105):418–21. https://doi.org/10.1136/bmj.315.7105.418 .

Roschelle J, Penuel W, Shechtman N. Co-design of innovations with teachers: definition and dynamics. 2006. https://doi.org/10.22318/icls2006.606 .

Pallesen KS, Rogers L, Anjara S, De Brún A, McAuliffe E. A qualitative evaluation of participants’ experiences of using co-design to develop a collective leadership educational intervention for health-care teams. Health Expect. 2020;23(2):358–67. https://doi.org/10.1111/hex.13002 .

Grol Richard WM, Eccles M, Davis D. Improving Patient Care: The Implementation of Change in Health Care. 2nd ed. Chichester: Wiley; 2013.


Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50. https://doi.org/10.1186/1748-5908-4-50 .

Michie S, van Stralen MM, West R. The behaviour change wheel: a new method for characterising and designing behaviour change interventions. Implement Sci. 2011;6(1): 42. https://doi.org/10.1186/1748-5908-6-42 .

Atkins L, Francis J, Islam R, O’Connor D, Patey A, Ivers N, et al. A guide to using the theoretical domains framework of behaviour change to investigate implementation problems. Implement Sci. 2017;12(1):77. https://doi.org/10.1186/s13012-017-0605-9 .

Kongsted A, Hartvigsen J, Boyle E, Ris I, Kjaer P, Thomassen L, et al. GLA:D® back: group-based patient education integrated with exercises to support self-management of persistent back pain — feasibility of implementing standardised care by a course for clinicians. Pilot Feasibility Stud. 2019;5(1):65. https://doi.org/10.1186/s40814-019-0448-z .

Schröder K, Öberg B, Enthoven P, Kongsted A, Abbott A. Confidence, attitudes, beliefs and determinants of implementation behaviours among physiotherapists towards clinical management of low back pain before and after implementation of the BetterBack model of care. BMC Health Serv Res. 2020;20(1):443. https://doi.org/10.1186/s12913-020-05197-3 .

Louw A, Diener I, Butler DS, Puentedura EJ. The effect of neuroscience education on pain, disability, anxiety, and stress in chronic musculoskeletal pain. Arch Phys Med Rehabil. 2011;92(12):2041–56. https://doi.org/10.1016/j.apmr.2011.07.198 .

Nijs J, Paul van Wilgen C, Van Oosterwijck J, van Ittersum M, Meeus M. How to explain central sensitization to patients with ‘unexplained’ chronic musculoskeletal pain: practice guidelines. Man Ther. 2011;16(5):413–8. https://doi.org/10.1016/j.math.2011.04.005 .

Van Oosterwijck J, Nijs J, Meeus M, Truijen S, Craps J, Van den Keybus N, et al. Pain neurophysiology education improves cognitions, pain thresholds, and movement performance in people with chronic whiplash: a pilot study. J Rehabil Res Dev. 2011;48(1):43–58. https://doi.org/10.1682/jrrd.2009.12.0206 .

Moseley GL. Evidence for a direct relationship between cognitive and physical change during an education intervention in people with chronic low back pain. Eur J Pain. 2004;8(1):39–45. https://doi.org/10.1016/S1090-3801(03)00063-6 .

Moseley GL. Widespread brain activity during an abdominal task markedly reduced after pain physiology education: fMRI evaluation of a single patient with chronic low back pain. Aust J Physiother. 2005;51(1):49–52. https://doi.org/10.1016/s0004-9514(05)70053-2 .

Moseley GL. Joining forces – combining cognition-targeted motor control training with group or individual pain physiology education: a successful treatment for chronic low back pain. J Man Manipulative Ther. 2003;11(2):88–94. https://doi.org/10.1179/106698103790826383 .

Moseley GL, Nicholas MK, Hodges PW. A randomized controlled trial of intensive neurophysiology education in chronic low back pain. Clin J Pain. 2004;20(5):324–30. https://doi.org/10.1097/00002508-200409000-00007 .

Moseley GL. Combined physiotherapy and education is efficacious for chronic low back pain. Aust J Physiother. 2002;48(4):297–302. https://doi.org/10.1016/s0004-9514(14)60169-0 .

Flodgren G, O’Brien MA, Parmelli E, Grimshaw JM. Local opinion leaders: effects on professional practice and healthcare outcomes. Cochrane Database Syst Reviews. 2019;6(6):CD000125. https://doi.org/10.1002/14651858.CD000125.pub5 .

Gardner B, Whittington C, McAteer J, Eccles MP, Michie S. Using theory to synthesise evidence from behaviour change interventions: the example of audit and feedback. Soc Sci Med. 2010;70(10):1618–25. https://doi.org/10.1016/j.socscimed.2010.01.039 .

Demmelmaier I, Denison E, Lindberg P, Åsenlöf P. Tailored skills training for practitioners to enhance assessment of prognostic factors for persistent and disabling back pain: four quasi-experimental single-subject studies. Physiother Theory Pract. 2012;28(5):359–72. https://doi.org/10.3109/09593985.2011.629022 .

Krause F, Schmalz G, Haak R, Rockenbauch K. The impact of expert- and peer feedback on communication skills of undergraduate dental students – a single-blinded, randomized, controlled clinical trial. Patient Educ Couns. 2017;100(12):2275–82. https://doi.org/10.1016/j.pec.2017.06.025 .

Nijs J, van Wilgen P. Pijneducatie: een praktische handleiding voor (para) medici. Houten: Bohn Stafleu van Loghum; 2010.

Retrain Pain Foundation. Tired of waiting for pain to go away? Learn a science based approach to overcome chronic pain. Available from: https://www.retrainpain.org/ . Accessed 2 June 2023.

Beetsma AJ, Reezigt RR, Paap D, Reneman MF. Assessing future health care practitioners' knowledge and attitudes of musculoskeletal pain; development and measurement properties of a new questionnaire. Musculoskelet Sci Pract. 2020;50:102236. https://doi.org/10.1016/j.msksp.2020.102236 .

Munneke W, De Kooning M, Nijs J, Leclercq J, George C, Roussel N, et al. Cross-cultural adaptation and psychometric testing of the French version of the knowledge and attitudes of Pain (KNAP) questionnaire. Annals Phys Rehabilitation Med. 2023;66(7):101757. https://doi.org/10.1016/j.rehab.2023.101757 .

Carr E. Barriers to effective pain management. J Perioper Pract. 2007;17(5):200–8. https://doi.org/10.1177/175045890701700502 .

Berquin A, Faymonville M, Deseure K, Van Liefferinge A, Celentano J, Crombez G, et al. Aanpak van chronische pijn in België: Verleden, heden en toekomst. Federale Overheidsdienst Volksgezondheid, Veiligheid van de Voedselketen en Leefmilieu. 2011. Available from https://www.health.belgium.be/nl/rapport-aanpak-van-chronische-pijn-belgie . Accessed 20 Apr 2024.

Brockopp DY, Brockopp G, Warden S, Wilson J, Carpenter JS, Vandeveer B. Barriers to change: a pain management project. Int J Nurs Stud. 1998;35(4):226–32. https://doi.org/10.1016/S0020-7489(98)00035-2 .

Fritz J, Söderbäck M, Söderlund A, Sandborgh M. The complexity of integrating a behavioral medicine approach into physiotherapy clinical practice. Physiother Theory Pract. 2019;35(12):1182–93. https://doi.org/10.1080/09593985.2018.1476996 .

Cowell I, O’Sullivan P, O’Sullivan K, Poyton R, McGregor A, Murtagh G. Perceptions of physiotherapists towards the management of non-specific chronic low back pain from a biopsychosocial perspective: a qualitative study. Musculoskelet Sci Pract. 2018;38:113–9. https://doi.org/10.1016/j.msksp.2018.10.006 .

Matthews J, Hall AM, Hernon M, Murray A, Jackson B, Taylor I, et al. A brief report on the development of a theoretically-grounded intervention to promote patient autonomy and self-management of physiotherapy patients: face validity and feasibility of implementation. BMC Health Serv Res. 2015;15(1):260. https://doi.org/10.1186/s12913-015-0921-1 .

Park J, Hirz CE, Manotas K, Hooyman N. Nonpharmacological pain management by ethnically diverse older adults with chronic pain: barriers and facilitators. J Gerontol Soc Work. 2013;56(6):487–508. https://doi.org/10.1080/01634372.2013.808725 .

Landes SJ, McBain SA, Curran GM. An introduction to effectiveness-implementation hybrid designs. Psychiatry Res. 2019;280: 112513. https://doi.org/10.1016/j.psychres.2019.112513 .

Slater H, Jordan JE, O’Sullivan PB, Schütze R, Goucke R, Chua J, et al. Listen to me, learn from me: a priority setting partnership for shaping interdisciplinary pain training to strengthen chronic pain care. Pain. 2022;163(11). https://doi.org/10.1097/j.pain.0000000000002647 .

Machira G, Kariuki H, Martindale L. Impact of an educational pain management programme on nurses’ pain knowledge and attitudes in Kenya. Int J Palliat Nurs. 2013;19(7):341–5. https://doi.org/10.12968/ijpn.2013.19.7.341 .

Kongsted A, Ris I, Kjaer P, Vach W, Morsø L, Hartvigsen J. GLA:D® back: implementation of group-based patient education integrated with exercises to support self-management of back pain - protocol for a hybrid effectiveness-implementation study. BMC Musculoskelet Disord. 2019;20(1):85. https://doi.org/10.1186/s12891-019-2443-1 .

Sheldon LK. Communication in oncology care: the effectiveness of skills training workshops for healthcare providers. Clin J Oncol Nurs. 2005;9(3):305. https://doi.org/10.1188/05.CJON.305-312 .

Ris I, Boyle E, Myburgh C, Hartvigsen J, Thomassen L, Kongsted A. Factors influencing implementation of the GLA:D back, an educational/exercise intervention for low back pain: a mixed-methods study. JBI Evid Implement. 2021;19(4):394–408. https://doi.org/10.1097/xeb.0000000000000284 .

Noesgaard SS, Ørngreen R. The effectiveness of e-learning: an explorative and integrative review of the definitions, methodologies and factors that promote e-learning effectiveness. Electron J E-learning. 2015;13(4):278–90.

Fewster-Thuente L, Velsor-Friedrich B. Interdisciplinary collaboration for healthcare professionals. Nurs Adm Q. 2008;32(1):40–8. https://doi.org/10.1097/01.NAQ.0000305946.31193.61 .

Wilson T, Mires G. A comparison of performance by medical and midwifery students in multiprofessional teaching. Med Educ. 2000;34(9):744–6. https://doi.org/10.1046/j.1365-2923.2000.00619.x .


Acknowledgements

The authors would like to acknowledge all members of the expert panel and patient panel for their valuable contributions.

We are grateful to Jean-Philippe Agelas, Leen Vermeulen, Veerle Van Hoestenberghe, Imane Hafid, Lisa Ortscheid, Margaux Aron, Sofie Habets, and Yannick Depas for providing the courses and sharing their experiences to update the training program.

Special thanks to all organisations that collaborated to successfully implement the study in Antwerp, Brussels, Ghent, Namur and Liege.

We would like to thank the members of the guidance committee of the implementation study for successfully guiding the implementation study.

We are also thankful to Matijs van den Eijnden for providing input on applying motivational interviewing in our training program.

Finally, we would like to express our appreciation to all HCPs who participated in the study.

This study was funded by the Belgian Federal Public Service of Health, Food Chain Safety and Environment, EBP/DC/NYU/2019/01. The authors declare that they have no competing interests.

Author information

Authors and Affiliations

Department of Physiotherapy, Human Physiology and Anatomy, Faculty of Physical Education and Physiotherapy, Vrije Universiteit Brussel, Brussels, Belgium

Wouter Munneke, Jo Nijs & Margot De Kooning

Pain in Motion International Research Group (PiM) https://paininmotion.be/

Wouter Munneke, Jo Nijs, Mira Meeus & Margot De Kooning

Department of Sport and Rehabilitation Sciences, University of Liège, Liege, Belgium

Wouter Munneke & Christophe Demoulin

Department of Health and Rehabilitation, Unit of Physiotherapy, Institute of Neuroscience and Physiology, University of Gothenburg, Gothenburg, Sweden

Department of rehabilitation medicine and physiotherapy, University Hospital Brussels, Brussels, Belgium

Société Scientifique de Médecine Générale (SSMG), Brussels, Belgium

Carine Morin

Domus Medica, Antwerp, Belgium

Department of Physical and Rehabilitation Medicine, Cliniques universitaires Saint-Luc, Brussels, Belgium

Anne Berquin

MOVANT research group, Department of Rehabilitation Sciences and Physiotherapy, Faculty of Health Sciences and Medicine, University of Antwerp, Antwerp, Belgium


Contributions

CD, JN, AB, MM and MDK wrote the original study plan and applied for and received the funding for the implementation study. WM conducted the expert panel meetings and data collection, and analysed the data under supervision from MDK. WM, CD, JN and MDK developed the training program with support from all authors. CD and CM translated the training program into French. WM, CD, AB, EK, CM and MDK carried out the implementation. WM and MDK wrote the manuscript with support from all authors. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Margot De Kooning .

Ethics declarations

Ethics approval and consent to participate

This study was performed in line with the principles of the Declaration of Helsinki. Approval was granted by the Medical Ethical Committee (EC-2021-327) linked to the University Hospital of Brussels, Brussels, Belgium. All participants provided informed consent.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary material 1.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Munneke, W., Demoulin, C., Nijs, J. et al. Development of an interdisciplinary training program about chronic pain management with a cognitive behavioural approach for healthcare professionals: part of a hybrid effectiveness-implementation study. BMC Med Educ 24 , 331 (2024). https://doi.org/10.1186/s12909-024-05308-2


Received : 23 August 2023

Accepted : 13 March 2024

Published : 22 March 2024

DOI : https://doi.org/10.1186/s12909-024-05308-2


Keywords

  • Persistent pain
  • Implementation
  • Health personnel
  • Communication
  • Health planning
  • Expert Opinion
  • Behavior and behavior mechanisms

BMC Medical Education

ISSN: 1472-6920


How Does Strengths-Based Therapy Work?

Why it can help to focus on your abilities rather than deficits

Dr. Amy Marschall is an autistic clinical psychologist with ADHD, working with children and adolescents who also identify with these neurotypes among others. She is certified in TF-CBT and telemental health.


Dr. Sabrina Romanoff, PsyD, is a licensed clinical psychologist and a professor at Yeshiva University’s clinical psychology doctoral program.


Strengths-based therapy is an approach to psychotherapy treatment based in positive psychology . It focuses on a person's existing resources, resilience, and positive qualities, then uses these abilities to improve their quality of life and reduce problematic symptoms.

Learning the goals and techniques of this type of therapy can help you know what to expect. It's also beneficial to understand what conditions strengths-based therapy can help with, the benefits it provides, and what research says about its effectiveness so you know whether it may be a good option for you.

Goal of Strengths-Based Therapy

Strengths-based therapy aims to improve a person's mindset and instill a positive worldview. It enables them to see themselves as resourceful and resilient when experiencing adverse conditions or hardships.

What makes this approach different is its focus on identifying factors that might be holding back a person’s growth. It empowers participants to be an agent of change by creating an environment that promotes the change they need or desire.

Strengths-Based Therapy Techniques

In all therapeutic approaches, the therapist chooses their techniques based on the client’s unique needs. Techniques a strengths-based therapist might use in their sessions include:

  • Reviewing a strengths list : The therapist provides a list of strengths with definitions and works with the client to identify which strengths apply to them. Clients can use the list as a starting point to identify their strengths.
  • Asking open-ended questions : This technique involves asking questions such as, “What are you good at?” Asking open-ended questions is less structured, allowing the client to identify strengths that might not have been included on the list.
  • Reframing weaknesses : A strengths-based therapist encourages the client to examine their weaknesses and identify ways these qualities can be reframed as strengths . For example, someone who worries about what others are thinking might be very compassionate and caring. This is used as a starting point to identify how these behaviors or qualities can be used to improve the client’s mindset or quality of life.
  • Strengths journaling : When using this technique, the therapist asks the client to keep a journal tracking their strengths. This helps them learn to identify their strengths, and also identify situations where the strengths benefit them. Journaling improves mindfulness and helps the client notice existing strengths in their daily life.
  • Maximizing strengths usage : A strengths-based therapist may also ask questions to help the client identify how and when a strength benefits them to help channel and use that strength in the most effective way possible. These questions can include things like, “When is a time you could have used this strength?” or “When is a time that you relied too heavily on this strength?”

When Strengths-Based Therapy Can Help

Strengths-based therapy can be helpful for many different concerns. It can help boost self-esteem and confidence, for instance, with some evidence that this approach can be beneficial for individuals with depression or anxiety .

Strengths-based therapy can also help individuals recovering from trauma . Building resilience and improving worldview can help alleviate many symptoms associated with these diagnoses.

Couples and families can benefit from a strengths-based treatment as it helps reframe challenges and boost healthy communication skills . Individuals learn to recognize how their strengths contribute to the relationship while beginning to identify their partner or family member’s strengths as well.

Strengths-based therapy could also help teens with identity development and insight. For similar reasons, it also benefits career counseling and determining what kinds of jobs might be a good fit for an individual.

Strengths-based therapy may be utilized in conjunction with other types of therapy, including cognitive-behavioral therapy , humanistic therapy , and narrative therapy . Therapists might also use a strengths-based approach if they are engaging in solution-focused therapy , brief motivational interviewing , or interpersonal therapy .

Benefits of Strengths-Based Therapy

Many people find strengths-based therapy beneficial in their mental health journeys. One reason is that positive psychology changes the traditional therapy narrative from “What do we need to fix about you?” to “What is the good that is already in you, and how can we bring that out?”

Strengths-based therapy also helps build resilience by identifying strengths you've used in the past but might not have defined as strengths at the time. The approach teaches you that you are already strong, and you already have the skills needed to survive. You simply need to learn how to tap into those skills and use them intentionally in your life.

Individuals enter strengths-based therapy with strengths; the therapy simply helps amplify these strengths so they're used for the person's maximum benefit.

Effectiveness of Strengths-Based Therapy

Research surrounding strengths-based therapy has shown that it is an effective treatment for a variety of conditions, including depression and trauma. It is also beneficial as an early intervention for serious mental health issues , such as psychosis.

Although people of all ages can benefit from this approach, teenagers in particular often find strengths-based therapy effective. This is partly because it focuses on developing and using resilient beliefs and behaviors rather than on identifying and challenging cognitive distortions.

Strengths-based therapy can be effective for both in-person and telehealth therapy sessions . This enables participants to choose which method works best for them.

Strengths-Based Therapy Criticisms

As with all therapeutic approaches, strengths-based therapy is not an ideal fit for everyone. Criticisms of this approach include:

  • The approach lends itself to toxic positivity or focusing so intensely on a positive mindset that there is no space left for negative emotions or thoughts.
  • Some weaknesses might not be strengths in disguise , and clients might feel invalidated if the therapist suggests otherwise.
  • Strengths-based therapy emphasizes qualities and skills that are already present , so individuals looking to make changes in their lives might not find this intervention helpful.

If this approach is not the right fit for you, that is okay. There are many therapeutic options that can help improve mental health.

How to Get Started With Strengths-Based Therapy

If you feel that a strengths-based approach may benefit you and you are currently in therapy, you can ask your therapist if they are familiar with this approach. Talk about whether these interventions might be a good fit for you. Inquire whether they offer strengths-based therapy and, if not, ask if they could refer you to someone who does.

If you do not already have a therapist, you can also search for a therapist who specializes in this approach. Therapists with training in a strengths-based approach will often indicate this on their website or profile.

During intake, the therapist will gather information about your history and symptoms. They might also have you complete a strengths-based assessment to gather more information. Then, you and your therapist will work together to create a treatment plan that focuses on your strengths and positive qualities.





Use of Low-cost Molecular Diagnostic Techniques as a New Surveillance Model for Diseases Preventable by Vaccinations.


Molecular surveillance data analysis of pertussis, respiratory virus disease, and invasive bacterial diseases. We will measure the incidence rate of vaccine-preventable diseases during the project, stratifying by biological sex, age, other risk factors, and severity of infection.

Invasive bacterial diseases (IBD) due to encapsulated bacteria (Streptococcus pneumoniae, Neisseria meningitidis, Haemophilus influenzae) and other vaccine-preventable infections (e.g., influenza virus)

Inclusion Criteria:

  • Subjects aged 0-18 years with infectious disease (bacterial or viral) diagnosed by molecular techniques at the AOU Meyer Immunology Laboratory
  • Obtaining informed consent

Exclusion Criteria:

  • Individuals with concomitant diseases such as cystic fibrosis or known immunodeficiency, or with suspected nosocomial infection (hospitalization or hospital admission within 15 days prior to the onset of symptoms), will be excluded from the study

Emerg (Tehran). 2017;5(1).

Sampling methods in Clinical Research; an Educational Review

Mohamed Elfil

1 Faculty of Medicine, Alexandria University, Egypt.

Ahmed Negida

2 Faculty of Medicine, Zagazig University, Egypt.

Clinical research usually involves patients with a certain disease or condition. The generalizability of clinical research findings is based on multiple factors related to the internal and external validity of the research methods. The main methodological issue influencing the generalizability of clinical research findings is the sampling method. In this educational article, we explain the different sampling methods used in clinical research.

Introduction

In clinical research, we define the population as a group of people who share a common characteristic or condition, usually the disease of interest. If we are conducting a study on patients with ischemic stroke, it would be difficult to include the whole population of ischemic stroke patients all over the world, because it is difficult to locate and gain access to everyone. Therefore, the practical approach in clinical research is to include a part of this population, called the "sample population". The whole population is sometimes called the "target population" while the sample population is called the "study population". When conducting a research study, we should consider the sample to be as representative of the target population as possible, with the least possible error and without substitution or incompleteness. The process of selecting a sample population from the target population is called the "sampling method".

Sampling types

There are two major categories of sampling methods ( figure 1 ): (1) probability sampling methods, in which all subjects in the target population have an equal chance of being selected into the sample [ 1 , 2 ]; and (2) non-probability sampling methods, in which the sample population is selected through a non-systematic process that does not guarantee equal chances for each subject in the target population [ 2 , 3 ]. Samples selected using probability sampling methods are more representative of the target population.

[Figure 1: Sampling methods.]

Probability sampling method

Simple random sampling

This method is used when the whole population is accessible and the investigators have a list of all subjects in the target population. This list is called the "sampling frame". From this list, we draw a random sample using the lottery method or a computer-generated random list [ 4 ].
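Drawing a computer-generated random sample can be sketched in a few lines of Python; the numbered sampling frame below is hypothetical, standing in for a real list of registered patients:

```python
import random

# Hypothetical sampling frame: one ID per subject in the target population.
sampling_frame = list(range(1, 1001))  # e.g. 1000 registered patients

rng = random.Random(42)                    # seeded only to make the illustration reproducible
sample = rng.sample(sampling_frame, k=50)  # 50 subjects, drawn without replacement
```

Because `random.sample` draws without replacement, every subject has the same chance of selection and none can appear twice.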

Stratified random sampling

This method is a modification of simple random sampling; therefore, it also requires a sampling frame to be available. However, in this method, the whole population is divided into homogeneous strata or subgroups according to a demographic factor (e.g. gender, age, religion, socio-economic level, education, or diagnosis). Then, the researchers draw a random sample from each stratum [ 3 , 4 ]. The advantages of this method are: (1) it allows researchers to obtain an effect size from each stratum separately, as if it were a separate study, so between-group differences become apparent; and (2) it allows adequate samples to be obtained from minority/under-represented populations. If the researchers used simple random sampling, the minority population would remain under-represented in the sample as well, simply because a simple random sample mirrors the composition of the whole target population. In such cases, investigators can use stratified random sampling to obtain adequate samples from all strata in the population.
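A minimal sketch of this two-step logic in Python, assuming a hypothetical frame stratified by sex (the `stratified_sample` helper and its parameters are illustrative, not from the original article):

```python
import random

# Hypothetical sampling frame with a stratification factor (sex).
frame = [{"id": i, "sex": "F" if i % 3 else "M"} for i in range(1, 301)]

def stratified_sample(frame, key, n_per_stratum, seed=0):
    """Split the frame into strata by `key`, then draw a simple
    random sample of fixed size from each stratum."""
    rng = random.Random(seed)
    strata = {}
    for subject in frame:
        strata.setdefault(subject[key], []).append(subject)
    return {level: rng.sample(members, n_per_stratum)
            for level, members in strata.items()}

sample = stratified_sample(frame, key="sex", n_per_stratum=20)
```

Fixing `n_per_stratum` guarantees the smaller stratum is fully represented, which is exactly what simple random sampling cannot promise.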

Systematic random sampling (Interval sampling)

In this method, the investigators select subjects for the sample based on a systematic rule, using a fixed interval. For example, if the rule is to include the last patient from every 5 patients, we will include patients with these numbers (5, 10, 15, 20, 25, ...etc.). In some situations, a sampling frame is not necessary, for instance when patients visit a specific hospital or center regularly. In this case, the researcher can start at a random point and then systematically choose subsequent patients using a fixed interval [ 4 ].
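The "random start, fixed interval" rule translates directly into list slicing; the list of clinic attendees below is hypothetical:

```python
import random

# Hypothetical list of 100 consecutive clinic attendees.
patients = [f"P{i:03d}" for i in range(1, 101)]
interval = 5  # sampling interval: every 5th patient

rng = random.Random(7)
start = rng.randrange(interval)     # random starting point within the first interval
sample = patients[start::interval]  # then a fixed step of 5 through the list
```

Only the starting point is random; once it is fixed, membership in the sample is fully determined by the interval.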

Cluster sampling (Multistage sampling)

It is used when creating a sampling frame is nearly impossible due to the large size of the population. In this method, the population is divided by geographic location into clusters. A list of all clusters is made, and the investigators draw a random sample of clusters to be included. Then, they list all individuals within these clusters and run another round of random selection to obtain a final random sample, exactly as in simple random sampling. This method is called multistage because the selection passes through two stages: first, the selection of eligible clusters; then, the selection of the sample from individuals within those clusters. As an example, suppose we are conducting a research project on primary school students in Iran. It would be very difficult to obtain a list of all primary school students across the country. In this case, a list of primary schools is made, the researcher randomly picks a number of schools, and then draws a random sample from the eligible schools [ 3 ].
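The two stages can be sketched as follows, using a hypothetical roster of 20 schools with 40 pupils each (here the second stage draws a fixed number of pupils per selected cluster, one common variant of the design):

```python
import random

# Hypothetical two-stage frame: schools (clusters), each with a pupil roster.
schools = {f"school_{s}": [f"s{s}_pupil_{p}" for p in range(1, 41)]
           for s in range(1, 21)}  # 20 schools x 40 pupils

rng = random.Random(1)

# Stage 1: randomly select 5 clusters (schools) from the list of clusters.
chosen_schools = rng.sample(sorted(schools), 5)

# Stage 2: list individuals only within the chosen clusters,
# then draw a simple random sample of 10 pupils from each.
sample = [pupil for school in chosen_schools
          for pupil in rng.sample(schools[school], 10)]
```

Note that a full roster is needed only for the 5 selected schools, never for all 20, which is the practical advantage of the method.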

Non-probability sampling method

Convenience sampling

Although it is a non-probability sampling method, it is the most widely used method in clinical research. In this method, the investigators enroll subjects according to their availability and accessibility. Therefore, this method is quick, inexpensive, and convenient. It is called convenience sampling because the researcher selects the sample elements according to their convenient accessibility and proximity [ 3 , 6 ]. For example, assume that we will perform a cohort study on Egyptian patients with hepatitis C virus (HCV). The convenience sample here will be confined to the population accessible to the research team: HCV patients attending Zagazig University Hospital and Cairo University Hospitals. Therefore, within the study period, all patients attending these two hospitals who meet the eligibility criteria will be included in the study.

Judgmental sampling

In this method, the subjects are selected at the discretion of the investigators. The researcher assumes specific characteristics for the sample (e.g. a male/female ratio of 2:1) and judges the sample to be suitable for representing the population. This method is widely criticized due to the likelihood of bias introduced by investigator judgement [ 5 ].

Snowball sampling

This method is used when the population cannot be located in a specific place and is therefore difficult to access. In this method, the investigator asks each subject to provide access to colleagues from the same population. This situation is common in social science research; for example, if we are running a survey on street children, there will be no list of homeless children, and it will be difficult to locate this population in one place such as a school or hospital. Here, the investigators deliver the survey to one child and then ask him to take them to his peers or to deliver the surveys to them.

Conflict of interest:

IMAGES

  1. Types of Primary Medical Research

    techniques used in clinical research

  2. Clinical Research

    techniques used in clinical research

  3. What Is Clinical Research?

    techniques used in clinical research

  4. Certified Workshop on Clinical Research & Methodology

    techniques used in clinical research

  5. 15 Research Methodology Examples (2023)

    techniques used in clinical research

  6. 5 Essential Clinical Laboratory Techniques to Ensure Quality Results

    techniques used in clinical research

VIDEO

  1. terms used in clinical Anatomy

  2. Clinical Research course I Clinical Research Program I Online Course in Clinical Research

  3. Neuroscientist: Do this to cure your anxiety…

  4. Why Clinical Research Funding Matters 🤷🏼‍♀️💰

  5. EVERYTHING CLINICAL RESEARCH

  6. How Clinical Research Transforms Patient Care 🤗😃

COMMENTS

  1. Methodology for clinical research

    Further classification of clinical research methods may be based on data collection techniques and the direction of causality being investigated, as illustrated for example by time relationships. Clinical research can be classified as either descriptive or analytical, as illustrated in Figure 2 [ 9 , 12 , 17-20 ].

  2. Planning and Conducting Clinical Research: The Whole Process

    Clinical research in this review refers to scientific research related to clinical practices. There are many ways a clinical research's findings can become invalid or less impactful including ignorance of previous similar studies, ... The statistical methods used to produce the results should be explicitly explained. Many different statistical ...

  3. A tutorial on methodological studies: the what, when, how and why

    Background Methodological studies - studies that evaluate the design, analysis or reporting of other research-related reports - play an important role in health research. They help to highlight issues in the conduct of research with the aim of improving health research methodology, and ultimately reducing research waste. Main body We provide an overview of some of the key aspects of ...

  4. Clinical Research What is It

    Clinical research is the comprehensive study of the safety and effectiveness of the most promising advances in patient care. Clinical research is different than laboratory research. It involves people who volunteer to help us better understand medicine and health. Lab research generally does not involve people — although it helps us learn ...

  5. PDF HEALTH RESEARCH METHODOLOGY

    Health research methodology: A guide for training in research methods Chapter 1 Research and Scientific Methods 1.1 Definition Research is a quest for knowledge through diligent search or investigation or experimentation aimed at the discovery and interpretation of new knowledge. Scientific method is a systematic

  6. An overview of commonly used statistical methods in clinical research

    In order to interpret research datasets, clinicians involved in clinical research should have an understanding of statistical methodology. This article provides a brief overview of statistical methods that are frequently used in clinical research studies. Descriptive and inferential methods, including regression modeling and propensity scores ...

  7. An overview of commonly used statistical methods in clinical research

    In order to interpret research datasets, clinicians involved in clinical research should have an understanding of statistical methodology. This article provides a brief overview of statistical methods that are frequently used in clinical research studies. Descriptive and inferential methods, including regression modeling and propensity scores ...

  8. Clinical research study designs: The essentials

    Introduction. In clinical research, our aim is to design a study, which would be able to derive a valid and meaningful scientific conclusion using appropriate statistical methods that can be translated to the "real world" setting. 1 Before choosing a study design, one must establish aims and objectives of the study, and choose an appropriate target population that is most representative of ...

  9. How to do high-quality clinical research 1: First steps

    Abstract. This is the first paper in a series of five on how to do good quality clinical research. It sets the scene for the four papers that follow. The aims of the series are to: promote reliable clinical research to inform clinical practice; help people new to research to get started (at any stage of their career); create teaching resources ...

  10. Home

    Experiments, including clinical trials, differ considerably in the methods used to assign participants to study conditions (or study arms) and to deliver interventions to those participants. This section provides information related to the design and analysis of experiments in which (1) participants are assigned in groups (or clusters) and ...

  11. What Are the Different Types of Clinical Research?

    Below are descriptions of some different kinds of clinical research. Treatment Research generally involves an intervention such as medication, psychotherapy, new devices, or new approaches to ...

  12. Clinical research methods for treatment, diagnosis, prognosis, etiology

    This narrative review is an introduction for health professionals on how to conduct and report clinical research on six categories: treatment, diagnosis/differential diagnosis, prognosis, etiology, screening, and prevention. The importance of beginning with an appropriate clinical question and the e …

  13. Research methods for the clinical surgeon

    Abstract. Research methods describe the tools, processes and techniques used to answer a research question. It is vitally important for the clinical surgeon to understand research methods and methodology to effectively appraise and design research studies. This article outlines some of the different research methods commonly used in clinical ...

  14. Digital and precision clinical trials: innovations for testing mental

    One reason for this gap is the traditional techniques used in mental health clinical trials, which slow the pace of progress, produce inequities in care, and undermine precision medicine goals.

  15. Research Methods In Psychology

    Olivia Guy-Evans, MSc. Research methods in psychology are systematic procedures used to observe, describe, predict, and explain behavior and mental processes. They include experiments, surveys, case studies, and naturalistic observations, ensuring data collection is objective and reliable to understand and explain psychological phenomena.

  16. Clinical Trial Terms and Methods

    Blinding: a method used in certain types of clinical trials that keeps study participants from knowing what study treatment they are receiving. This is called single-blinding. 8. Diagnostic Trials: a research study that evaluates methods of detecting disease. 9. Dose-Ranging Study: a type of clinical trial where different doses of a drug are ...

  17. Taking Advantage of New Tools and Techniques

    As with virtually every scientific endeavor, clinical effectiveness research can be improved and expedited through innovation. In this case, innovation means the better use of existing tools and techniques as well as the development of entirely new methods and approaches. Understanding these emerging tools and techniques is critical to the discussion of improvements to clinical effectiveness research.

  18. Qualitative Research Methods in Mental Health

    As the evidence base for the study of mental health problems develops, there is a need for increasingly rigorous and systematic research methodologies. Complex questions require complex methodological approaches. Recognising this, the MRC guidelines for developing and testing complex interventions place qualitative methods as integral to each stage of intervention development and implementation.

  19. Clinical Research Techniques

    Clinical research techniques commonly used in our research facility include several approaches to blood pressure (BP) measurement: semi-automated oscillometric BP, auscultatory BP, exercise BP, and 24-hour ambulatory blood pressure monitoring. Portable, automated blood pressure monitors can record blood pressure over a set period of time (usually 24 hours) at chosen time intervals; the output includes systolic, diastolic, and mean pressures.
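The "mean" pressure in ambulatory monitoring output is typically the mean arterial pressure (MAP), commonly approximated as diastolic pressure plus one third of the pulse pressure. A minimal sketch with made-up readings (the values are illustrative, not from the facility described above):

```python
def mean_arterial_pressure(systolic, diastolic):
    """Approximate MAP: diastolic plus one third of the pulse pressure."""
    return diastolic + (systolic - diastolic) / 3

# Hypothetical 24-h readings as (systolic, diastolic) pairs in mmHg.
readings = [(120, 80), (135, 85), (110, 70)]
maps = [mean_arterial_pressure(s, d) for s, d in readings]
print(round(sum(maps) / len(maps), 1))  # → 92.8
```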

  20. The Use of Historical Controls in Clinical Trials

    A randomized clinical trial (RCT) is frequently the preferred research design for testing new medical treatments. Randomization helps to ensure that the participants in the treatment groups are similar in the distribution of prognostic factors. This minimizes bias in statistical comparisons of patient outcomes and allows differences to be interpreted as the causal effect of treatment.
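The balanced allocation that randomization provides can be illustrated with a short sketch. This is a generic permuted-block scheme, not the procedure of any particular trial; the arm names, block size and seed are illustrative assumptions:

```python
import random

def block_randomize(participant_ids, block_size=4, arms=("treatment", "control"), seed=42):
    """Assign participants to arms in randomly permuted blocks so that
    group sizes stay balanced throughout enrolment."""
    rng = random.Random(seed)
    per_arm = block_size // len(arms)
    allocation = {}
    for start in range(0, len(participant_ids), block_size):
        block = list(arms) * per_arm  # each block holds equal numbers per arm
        rng.shuffle(block)            # random order within each block
        for pid, arm in zip(participant_ids[start:start + block_size], block):
            allocation[pid] = arm
    return allocation

alloc = block_randomize([f"P{i:02d}" for i in range(20)])
print(sum(arm == "treatment" for arm in alloc.values()))  # balanced: 10 of 20
```

Because every completed block contributes equally to each arm, group sizes can never drift far apart, which is what keeps prognostic factors comparable on average.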

  21. Research Guides: Evidence-Based Practice: EBP: Principles

    Clinical expertise refers to the clinician's cumulated experience, education, and clinical skills. The patient brings to the encounter his or her own personal and unique concerns, expectations, and values. The best evidence is usually found in clinically relevant research that has been conducted using sound methodology (Sackett, 2002).

  22. Effect of guided implant placement learning experiences on freehand

    Use of guided surgical techniques may also improve haptic feedback and the perception of spatial distribution, and can aid in developing the hand-eye coordination necessary for implant placement.

  23. Connecting Chronic Pain and Opioid Use Disorder Clinical Trials Through

    The National Institutes of Health (NIH)'s development and support of the NIH HEAL Integrative Management of chronic Pain and OUD for Whole Recovery (IMPOWR) network reflects the growing understanding of the potential interconnections of chronic pain (CP) and opioid use disorder (OUD). The vision of the NIH HEAL IMPOWR network was to develop integrated treatment pathways.

  24. Commonly Utilized Data Collection Approaches in Clinical Research

    In this article we provide an overview of the different data collection approaches that are commonly utilized in carrying out clinical, public health, and translational research. We discuss several of the factors researchers need to consider when using data collected in questionnaire surveys, from proxy informants, and through the review of existing records.

  25. Research Initiative in Clinical Economics

    Clinical economics studies describe and quantify the relative value of health care services. This research combines information on the health outcomes of an intervention (usually its impact on the quality as well as the length of life) with information on its cost, for use by decision-makers in clinical and health system settings.

  26. Development of an interdisciplinary training program about chronic pain

    Many applied postgraduate pain training programs are monodisciplinary, whereas interdisciplinary training programs potentially improve interdisciplinary collaboration, which is favourable for managing patients with chronic pain. However, limited research exists on the development and impact of interdisciplinary training programs, particularly in the context of chronic pain.

  27. Strengths-Based Therapy: Definition and Techniques

    Learn the benefits of strengths-based therapy and when it might be used. Understanding the goals and techniques of this type of therapy can help you know what to expect, which conditions it can help with, the benefits it provides, and what research says about its effectiveness.

  28. Design, data analysis and sampling techniques for clinical research

    Published in Ann Indian Acad Neurol. 2011;14(4) (PMC3271469).

  29. Use of Low-cost Molecular Diagnostic Techniques as a New Surveillance

    For example, the limited use of low-cost, high-sensitivity techniques such as real-time PCR, which, if more widely used, could improve pathogen identification with three times the sensitivity of standard culture methods.

  30. Sampling methods in Clinical Research; an Educational Review

    Sampling types. There are two major categories of sampling methods: (1) probability sampling methods, in which all subjects in the target population have an equal chance of being selected for the sample [1, 2]; and (2) non-probability sampling methods, in which the sample is selected through a non-systematic process that does not guarantee each subject an equal chance of selection.
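The contrast between the two categories can be sketched in a few lines; the population, sample size and "first come, first served" convenience rule below are illustrative assumptions, not drawn from the review:

```python
import random

# A hypothetical target population of 100 subjects.
population = [f"subject_{i}" for i in range(100)]

# Probability sampling: a simple random sample, in which every
# subject has the same chance of being selected.
rng = random.Random(0)  # seeded only so the sketch is reproducible
random_sample = rng.sample(population, 10)

# Non-probability (convenience) sampling: take whoever is easiest to
# reach -- here, simply the first 10 -- so later subjects have no chance.
convenience_sample = population[:10]

print(len(random_sample), len(convenience_sample))  # 10 10
```

Only the first approach lets every member of the target population into the sample with a known, equal probability, which is what makes probability samples generalizable.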