How to Write Critical Reviews

When you are asked to write a critical review of a book or article, you will need to identify, summarize, and evaluate the ideas and information the author has presented. In other words, you will be examining another person’s thoughts on a topic from your point of view.

Your stand must go beyond your “gut reaction” to the work and be based on your knowledge (readings, lectures, experience) of the topic as well as on factors such as criteria stated in your assignment or discussed by you and your instructor.

Make your stand clear at the beginning of your review, in your evaluations of specific parts, and in your concluding commentary.

Remember that your goal should be to make a few key points about the book or article, not to discuss everything the author writes.

Understanding the Assignment

To write a good critical review, you will have to engage in the mental processes of analyzing (taking apart) the work: deciding what its major components are and determining how these parts (i.e., paragraphs, sections, or chapters) contribute to the work as a whole.

Analyzing the work will help you focus on how and why the author makes certain points and prevent you from merely summarizing what the author says. Assuming the role of an analytical reader will also help you to determine whether or not the author fulfills the stated purpose of the book or article and enhances your understanding or knowledge of a particular topic.

Be sure to read your assignment thoroughly before you read the article or book. Your instructor may have included specific guidelines for you to follow. Keeping these guidelines in mind as you read the article or book can really help you write your paper!

Also, note where the work connects with what you’ve studied in the course. You can make the most efficient use of your reading and notetaking time if you are an active reader; that is, keep relevant questions in mind and jot down page numbers as well as your responses to ideas that appear to be significant as you read.

Please note: The length of your introduction and overview, the number of points you choose to review, and the length of your conclusion should be proportionate to the page limit stated in your assignment and should reflect the complexity of the material being reviewed as well as the expectations of your reader.

Write the introduction

Below are a few guidelines to help you write the introduction to your critical review.

Introduce your review appropriately

Begin your review with an introduction appropriate to your assignment.

If your assignment asks you to review only one book and not to use outside sources, your introduction will focus on identifying the author, the title, the main topic or issue presented in the book, and the author’s purpose in writing the book.

If your assignment asks you to review the book as it relates to issues or themes discussed in the course, or to review two or more books on the same topic, your introduction must also encompass those expectations.

Explain relationships

For example, before you can review two books on a topic, you must explain to your reader in your introduction how they are related to one another.

Within this shared context (or under this “umbrella”) you can then review comparable aspects of both books, pointing out where the authors agree and differ.

In other words, the more complicated your assignment is, the more your introduction must accomplish.

Finally, the introduction to a book review is always the place for you to establish your position as the reviewer (your thesis about the author’s thesis).

As you write, consider the following questions:

  • Is the book a memoir, a treatise, a collection of facts, an extended argument, etc.? Is the article a documentary, a write-up of primary research, a position paper, etc.?
  • Who is the author? What does the preface or foreword tell you about the author’s purpose, background, and credentials? What is the author’s approach to the topic (as a journalist? a historian? a researcher?)?
  • What is the main topic or problem addressed? How does the work relate to a discipline, to a profession, to a particular audience, or to other works on the topic?
  • What is your critical evaluation of the work (your thesis)? Why have you taken that position? What criteria are you basing your position on?

Provide an overview

In your introduction, you will also want to provide an overview. An overview supplies your reader with certain general information that does not belong in the opening of the introduction but is necessary for understanding the body of the review.

Generally, an overview describes your book’s division into chapters, sections, or points of discussion. An overview may also include background information about the topic, about your stand, or about the criteria you will use for evaluation.

The overview and the introduction work together to provide a comprehensive beginning for (a “springboard” into) your review.

  • What are the author’s basic premises? What issues are raised, or what themes emerge? What situation (e.g., racism on college campuses) provides a basis for the author’s assertions?
  • How informed is my reader? What background information is relevant to the entire book and should be placed here rather than in a body paragraph?

Write the body

The body is the center of your paper, where you draw out your main arguments. Below are some guidelines to help you write it.

Organize using a logical plan

Organize the body of your review according to a logical plan. Here are two options:

  • First, summarize, in a series of paragraphs, those major points from the book that you plan to discuss; incorporating each major point into a topic sentence for a paragraph is an effective organizational strategy. Second, discuss and evaluate these points in a following group of paragraphs. (There are two dangers lurking in this pattern: you may allot too many paragraphs to summary and too few to evaluation, or you may re-summarize too many points from the book in your evaluation section.)
  • Alternatively, you can summarize and evaluate the major points you have chosen from the book in a point-by-point schema. That means you will discuss and evaluate point one within the same paragraph (or in several if the point is significant and warrants extended discussion) before you summarize and evaluate point two, point three, etc., moving in a logical sequence from point to point to point. Here again, it is effective to use the topic sentence of each paragraph to identify the point from the book that you plan to summarize or evaluate.

Questions to keep in mind as you write

With either organizational pattern, consider the following questions:

  • What are the author’s most important points? How do these relate to one another? (Make relationships clear by using transitions: “In contrast,” “an equally strong argument,” “moreover,” “a final conclusion,” etc.).
  • What types of evidence or information does the author present to support his or her points? Is this evidence convincing, controversial, factual, one-sided, etc.? (Consider the use of primary historical material, case studies, narratives, recent scientific findings, statistics.)
  • Where does the author do a good job of conveying factual material as well as personal perspective? Where does the author fail to do so? If solutions to a problem are offered, are they believable, misguided, or promising?
  • Which parts of the work (particular arguments, descriptions, chapters, etc.) are most effective and which parts are least effective? Why?
  • Where (if at all) does the author convey personal prejudice, support illogical relationships, or present evidence out of its appropriate context?

Keep your opinions distinct and cite your sources

Remember, as you discuss the author’s major points, be sure to distinguish consistently between the author’s opinions and your own.

Keep the summary portions of your discussion concise, remembering that your task as a reviewer is to re-see the author’s work, not to re-tell it.

And, importantly, if you refer to ideas from other books and articles or from lecture and course materials, always document your sources, or else you might wander into the realm of plagiarism.

Include only that material which has relevance for your review and use direct quotations sparingly. The Writing Center has other handouts to help you paraphrase text and introduce quotations.

Write the conclusion

You will want to use the conclusion to state your overall critical evaluation.

You have already discussed the major points the author makes, examined how the author supports arguments, and evaluated the quality or effectiveness of specific aspects of the book or article.

Now you must make an evaluation of the work as a whole, determining such things as whether or not the author achieves the stated or implied purpose and if the work makes a significant contribution to an existing body of knowledge.

Consider the following questions:

  • Is the work appropriately subjective or objective according to the author’s purpose?
  • How well does the work maintain its stated or implied focus? Does the author present extraneous material? Does the author exclude or ignore relevant information?
  • How well has the author achieved the overall purpose of the book or article? What contribution does the work make to an existing body of knowledge or to a specific group of readers? Can you justify the use of this work in a particular course?
  • What is the most important final comment you wish to make about the book or article? Do you have any suggestions for the direction of future research in the area? What has reading this work done for you or demonstrated to you?



Open access. Published: 08 October 2021

Scoping reviews: reinforcing and advancing the methodology and application

Micah D. J. Peters, Casey Marnie, Heather Colquhoun, Chantelle M. Garritty, Susanne Hempel, Tanya Horsley, Etienne V. Langlois, Erin Lillie, Kelly K. O’Brien, Özge Tunçalp, Michael G. Wilson, Wasifa Zarin & Andrea C. Tricco (ORCID: 0000-0002-4114-8971)

Systematic Reviews, volume 10, article number 263 (2021)


Scoping reviews are an increasingly common approach to evidence synthesis with a growing suite of methodological guidance and resources to assist review authors with their planning, conduct and reporting. The latest guidance for scoping reviews includes the JBI methodology and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses—Extension for Scoping Reviews. This paper provides readers with a brief update regarding ongoing work to enhance and improve the conduct and reporting of scoping reviews as well as information regarding the future steps in scoping review methods development. The purpose of this paper is to provide readers with a concise source of information regarding the difference between scoping reviews and other review types, the reasons for undertaking scoping reviews, and an update on methodological guidance for the conduct and reporting of scoping reviews.

Despite available guidance, some publications use the term ‘scoping review’ without clear consideration of available reporting and methodological tools. Selection of the most appropriate review type for the stated research objectives or questions, standardised use of methodological approaches and terminology in scoping reviews, clarity and consistency of reporting and ensuring that the reporting and presentation of the results clearly addresses the review’s objective(s) and question(s) are critical components for improving the rigour of scoping reviews.

Rigorous, high-quality scoping reviews should clearly follow up-to-date methodological guidance and reporting criteria. Stakeholder engagement is one area where further work could occur to enhance integration of consultation with the results of evidence syntheses and to support effective knowledge translation. Scoping review methodology is evolving as a policy and decision-making tool. Ensuring the integrity of scoping reviews by adherence to up-to-date reporting standards is integral to supporting well-informed decision-making.


Introduction

Given the rapidly increasing access to evidence and data, methods of identifying, charting and reporting on information must be driven by new, user-friendly approaches. Since 2005, when the first framework for scoping reviews was published, several more detailed approaches (both methodological guidance and a reporting guideline) have been developed. Scoping reviews are an increasingly common approach to evidence synthesis and are very popular amongst end users [1]. Indeed, one scoping review of scoping reviews found that 53% (262/494) of scoping reviews had government authorities and policymakers as their target end-user audience [2]. Scoping reviews can provide end users with important insights into the characteristics of a body of evidence, the ways in which concepts or terms have been used, and how a topic has been reported upon. Scoping reviews can provide overviews of either broad or specific research and policy fields, underpin research and policy agendas, highlight knowledge gaps and identify areas for subsequent evidence syntheses [3].

Despite, or even potentially because of, the range of different approaches to conducting and reporting scoping reviews that have emerged since Arksey and O’Malley’s first framework in 2005, a lack of consistency in the use of terminology, conduct and reporting persists [2, 4]. There are many examples of manuscripts titled ‘a scoping review’ without citing or appearing to follow any particular approach [5, 6, 7, 8, 9]. This is similar to how many reviews appear to misleadingly include ‘systematic’ in the title or purport to have adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement without doing so. Despite the publication of the PRISMA Extension for Scoping Reviews (PRISMA-ScR) and other recent guidance [4, 10, 11, 12, 13, 14], many scoping reviews continue to be conducted and published without apparent (i.e. cited) consideration of these tools, or with only cursory reference to Arksey and O’Malley’s original framework. We can only speculate at this stage why many authors appear to be either unaware of or unwilling to adopt more recent methodological guidance and reporting items in their work. It could be that some authors are more familiar and comfortable with the older, less prescriptive framework and see no reason to change. It could be that more recent methodologies such as JBI’s guidance and the PRISMA-ScR appear more complicated and onerous to comply with, and so seem unfit for purpose from the perspective of some authors. In their 2005 publication, Arksey and O’Malley themselves called for scoping review (then ‘scoping study’) methodology to continue to be advanced and built upon by subsequent authors, so it is interesting to note a persistent resistance to, or lack of awareness of, more recent guidance amongst some authors.

Whatever the reason or reasons, we contend that transparency and reproducibility are key markers of high-quality reporting of scoping reviews, and that reporting a review’s conduct and results clearly and consistently in line with a recognised methodology or checklist is more likely than not to enhance rigour and utility. ‘Scoping review’ should not be used as a synonym for an exploratory search or a general review of the literature; instead, it is critical that potential authors recognise the purpose and methodology of scoping reviews. In this editorial, we discuss the definition of scoping reviews, introduce contemporary methodological guidance and address the circumstances in which scoping reviews may be conducted. Finally, we briefly consider where ongoing advances in the methodology are occurring.

What is a scoping review and how is it different from other evidence syntheses?

A scoping review is a type of evidence synthesis that has the objective of identifying and mapping relevant evidence that meets pre-determined inclusion criteria regarding the topic, field, context, concept or issue under review. The review question guiding a scoping review is typically broader than that of a traditional systematic review. Scoping reviews may include multiple types of evidence (i.e. different research methodologies, primary research, reviews, non-empirical evidence). Because scoping reviews seek to develop a comprehensive overview of the evidence rather than a quantitative or qualitative synthesis of data, it is not usually necessary to undertake methodological appraisal or risk-of-bias assessment of the included sources. Scoping reviews systematically identify and chart the relevant literature available on a given topic that meets predetermined inclusion criteria, addressing specified objective(s) and review question(s) in relation to key concepts, theories, data and evidence gaps. Scoping reviews are unlike ‘evidence maps’, which can be defined as the figural or graphical presentation of the results of a broad and systematic search to identify gaps in knowledge and/or future research needs, often using a searchable database [15]. Evidence maps can be underpinned by a scoping review or be used to present the results of one. Scoping reviews are similar to, but distinct from, other well-known forms of evidence synthesis, of which there are many [16]. Whilst this paper’s purpose is not to go into depth regarding the similarities and differences between scoping reviews and the diverse range of other evidence synthesis approaches, Munn and colleagues recently discussed the key differences between scoping reviews and other common review types [3]. Like integrative reviews and narrative literature reviews, scoping reviews can include both research (i.e. empirical) and non-research evidence (grey literature) such as policy documents and online media [17, 18]. Scoping reviews also address broader questions, beyond the effectiveness of a given intervention typical of ‘traditional’ (i.e. Cochrane) systematic reviews, or peoples’ experience of a particular phenomenon of interest (i.e. JBI systematic reviews of qualitative evidence). Scoping reviews typically identify, present and describe relevant characteristics of included sources of evidence rather than seeking to combine statistical or qualitative data from different sources to develop synthesised results.

Similar to systematic reviews, the conduct of scoping reviews should be based on well-defined methodological guidance and reporting standards that include an a priori protocol, eligibility criteria and comprehensive search strategy [ 11 , 12 ]. Unlike systematic reviews, however, scoping reviews may be iterative and flexible and whilst any deviations from the protocol should be transparently reported, adjustments to the questions, inclusion/exclusion criteria and search may be made during the conduct of the review [ 4 , 14 ]. Unlike systematic reviews where implications or recommendations for practice are a key feature, scoping reviews are not designed to underpin clinical practice decisions; hence, assessment of methodological quality or risk of bias of included studies (which is critical when reporting effect size estimates) is not a mandatory step and often does not occur [ 10 , 12 ]. Rapid reviews are another popular review type, but as yet have no consistent, best practice methodology [ 19 ]. Rapid reviews can be understood to be streamlined forms of other review types (i.e. systematic, integrative and scoping reviews) [ 20 ].

Guidance to improve the quality of reporting of scoping reviews

Since the first 2005 framework for scoping reviews (then termed ‘scoping studies’) [13], the popularity of this approach has grown, with numbers doubling between 2014 and 2017 [2]. The PRISMA-ScR is the most up-to-date and advanced approach for reporting scoping reviews; it is largely based on the popular PRISMA statement and checklist, the JBI methodological guidance and other approaches for undertaking scoping reviews [11]. Experts in evidence synthesis, including authors of earlier guidance for scoping reviews, developed the PRISMA-ScR checklist and explanation using a robust and comprehensive approach. Enhancing the transparency and uniformity of reporting by using the PRISMA-ScR can help to improve the quality and value of a scoping review to readers and end users [21]. The PRISMA-ScR is not a methodological guideline for review conduct, but rather a complementary checklist to support comprehensive reporting of methods and findings that can be used alongside other methodological guidance [10, 12, 13, 14]. For this reason, authors who are more familiar with or prefer Arksey and O’Malley’s framework; Levac, Colquhoun and O’Brien’s extension of that framework; or JBI’s methodological guidance can each select their preferred methodological approach and report in accordance with the PRISMA-ScR checklist.

Reasons for conducting a scoping review

Whilst systematic reviews sit at the top of the evidence hierarchy, the types of research questions they address are not suitable for every application [3]. Many indications are more appropriately served by a scoping review: for example, exploring the extent and nature of a body of literature, developing evidence maps and summaries, informing future research and reviews, and identifying evidence gaps [2]. Scoping reviews are particularly useful where evidence is extensive and widely dispersed (i.e. many different types of evidence), or emerging and not yet amenable to questions of effectiveness [22]. Because scoping reviews are agnostic in terms of the types of evidence they can draw upon, they can be used to bring together and report upon heterogeneous literature, including both empirical and non-empirical evidence, across disciplines within and beyond health [23, 24, 25].

When deciding whether to conduct a systematic review or a scoping review, authors should have a strong understanding of their differences and be able to clearly identify their review’s precise research objective(s) and/or question(s). Munn and colleagues noted that a systematic review is likely the most suitable approach if reviewers intend to address questions regarding the feasibility, appropriateness, meaningfulness or effectiveness of a specified intervention [3]. There are also online resources for prospective authors [26]. A scoping review is probably best when research objectives or review questions involve exploring, identifying, mapping, reporting or discussing characteristics or concepts across a breadth of evidence sources.

Scoping reviews are increasingly used to respond to complex questions where comparing interventions may be neither relevant nor possible [27]. Often, cost, time and resources are factors in decisions regarding review type. Whilst many scoping reviews can be quite large, with numerous sources to screen and/or include, there is no expectation (or possibility) of statistical pooling, formal risk-of-bias rating or quality-of-evidence assessment [28, 29]. Topics where scoping reviews are necessary abound: for example, government organisations are often interested in the availability and applicability of tools to support health interventions, such as shared decision aids for pregnancy care [30]. Scoping reviews can also be applied to better understand complex issues related to the health workforce, such as how shift work affects employee performance across diverse occupational sectors, a question that involves a diversity of evidence types as well as attention to knowledge gaps [31]. Another example is where more conceptual knowledge is required, such as identifying and mapping existing tools [32]. Here, it is important to understand that scoping reviews are not the same as ‘realist reviews’, which can also be used to examine how interventions or programmes work. Realist reviews are typically designed to elucidate the theories that underpin a programme, examine evidence to reveal if and how those theories are relevant, and explain how the given programme works (or not) [33].

Increased demand for scoping reviews to underpin high-quality knowledge translation across many disciplines within and beyond healthcare in turn fuels the need for consistency, clarity and rigour in reporting; hence, following recognised reporting guidelines is a streamlined and effective way of introducing these elements [ 34 ]. Standardisation and clarity of reporting (such as by using a published methodology and a reporting checklist—the PRISMA-ScR) can facilitate better understanding and uptake of the results of scoping reviews by end users who are able to more clearly understand the differences between systematic reviews, scoping reviews and literature reviews and how their findings can be applied to research, practice and policy.

Future directions in scoping reviews

The field of evidence synthesis is dynamic. Scoping review methodology continues to evolve to account for the changing needs and priorities of end users and the requirements of review authors for additional guidance regarding terminology, elements and steps of scoping reviews. Areas where ongoing research and development of scoping review guidance are occurring include inclusion of consultation with stakeholder groups such as end users and consumer representatives [ 35 ], clarity on when scoping reviews are the appropriate method over other synthesis approaches [ 3 ], approaches for mapping and presenting results in ways that clearly address the review’s research objective(s) and question(s) [ 29 ] and the assessment of the methodological quality of scoping reviews themselves [ 21 , 36 ]. The JBI Scoping Review Methodology group is currently working on this research agenda.

Consulting with end users, experts or stakeholders has been a suggested but optional component of scoping reviews since 2005, and many of the subsequent approaches contain some reference to this useful activity. Stakeholder engagement is, however, often lost to the term ‘review’ in scoping reviews. Stakeholder engagement is important across all knowledge synthesis approaches to ensure the relevance, contextualisation and uptake of research findings; indeed, it underlines the concept of integrated knowledge translation [37, 38]. By including stakeholder consultation in the scoping review process, the utility and uptake of results may be enhanced, making reviews more meaningful to end users. Stakeholder consultation can also support integrated knowledge translation efforts, facilitate the identification of emerging priorities in the field not otherwise captured in the literature, and may help build partnerships amongst stakeholder groups including consumers, researchers, funders and end users. Development in the field of evidence synthesis overall could be inspired by the incorporation of stakeholder consultation in scoping reviews, leading to better integration of consultation and engagement within projects utilising other synthesis methodologies. Further work could also establish how, and to what extent, scoping reviews have contributed to synthesising evidence and advancing scientific knowledge more generally.

Currently, many methodological papers for scoping reviews are published in healthcare-focussed journals and associated disciplines [6, 39, 40, 41, 42, 43]. Another area where further work could occur is gaining a greater understanding of how scoping reviews and scoping review methodology are being used across disciplines beyond healthcare, including how authors, reviewers and editors understand, recommend or utilise existing guidance for undertaking and reporting scoping reviews.

Whilst available guidance for the conduct and reporting of scoping reviews has evolved over recent years, opportunities remain to further enhance and progress the methodology, its uptake and its application. Despite existing guidance, some publications using the term ‘scoping review’ continue to be produced without apparent consideration of available reporting and methodological tools. Because consistent and transparent reporting is widely recognised as important for supporting rigour, reproducibility and quality in research, we advocate for authors to use a stated scoping review methodology and to transparently report their conduct by using the PRISMA-ScR. Selection of the most appropriate review type for the stated research objectives or questions, standardised use of methodological approaches and terminology in scoping reviews, clarity and consistency of reporting, and ensuring that the reporting and presentation of the results clearly address the authors’ objective(s) and question(s) are also critical components for improving the rigour of scoping reviews. We contend that whilst the field of evidence synthesis and scoping reviews continues to evolve, the PRISMA-ScR is a valuable and practical tool for enhancing the quality of scoping reviews, particularly in combination with other methodological guidance [10, 12, 44]. Scoping review methodology is developing as a policy and decision-making tool, and so ensuring the integrity of these reviews by adhering to the most up-to-date reporting standards is integral to supporting well-informed decision-making. As scoping review methodology continues to evolve alongside understandings of why authors do or do not use particular methodologies, we hope that future incarnations of the methodology continue to provide useful, high-quality evidence to end users.

Availability of data and materials

All data and materials are available upon request.

Pham MT, Rajić A, Greig JD, Sargeant JM, Papadopoulos A, McEwen SA. A scoping review of scoping reviews: advancing the approach and enhancing the consistency. Res Synth Methods. 2014;5(4):371–85.


Tricco AC, Lillie E, Zarin W, et al. A scoping review on the conduct and reporting of scoping reviews. BMC Med Res Methodol. 2016;16:15.

Munn Z, Peters MDJ, Stern C, Tufanaru C, McArthur A, Aromataris E. Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Med Res Methodol. 2018;18(1):143.

Peters M, Marnie C, Tricco A, et al. Updated methodological guidance for the conduct of scoping reviews. JBI Evid Synth. 2020;18(10):2119–26.

Paiva L, Dalmolin GL, Andolhe R, dos Santos W. Absenteeism of hospital health workers: scoping review. Av enferm. 2020;38(2):234–48.

Visonà MW, Plonsky L. Arabic as a heritage language: a scoping review. Int J Biling. 2019;24(4):599–615.

McKerricher L, Petrucka P. Maternal nutritional supplement delivery in developing countries: a scoping review. BMC Nutr. 2019;5(1):8.


Fusar-Poli P, Salazar de Pablo G, De Micheli A, et al. What is good mental health? A scoping review. Eur Neuropsychopharmacol. 2020;31:33–46.

Jowsey T, Foster G, Cooper-Ioelu P, Jacobs S. Blended learning via distance in pre-registration nursing education: a scoping review. Nurse Educ Pract. 2020;44:102775.

Peters MD, Godfrey CM, Khalil H, McInerney P, Parker D, Soares CB. Guidance for conducting systematic scoping reviews. Int J Evid-based Healthc. 2015;13(3):141–6.

Tricco AC, Lillie E, Zarin W, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. 2018;169(7):467–73.

Peters MDJ, Godfrey C, McInerney P, Munn Z, Tricco AC, Khalil H. Chapter 11: Scoping reviews (2020 version). In: Aromataris E, Munn Z, editors. JBI Manual for Evidence Synthesis. JBI; 2020.


Arksey H, O’Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. 2005;8(1):19–32.

Levac D, Colquhoun H, O’Brien KK. Scoping studies: advancing the methodology. Implement Sci. 2010;5(1):69.

Miake-Lye IM, Hempel S, Shanman R, Shekelle PG. What is an evidence map? A systematic review of published evidence maps and their definitions, methods, and products. Syst Rev. 2016;5(1):28.

Sutton A, Clowes M, Preston L, Booth A. Meeting the review family: exploring review types and associated information retrieval requirements. Health Inf Libr J. 2019;36(3):202–22.

Brady BR, De La Rosa JS, Nair US, Leischow SJ. Electronic cigarette policy recommendations: a scoping review. Am J Health Behav. 2019;43(1):88–104.

Truman E, Elliott C. Identifying food marketing to teenagers: a scoping review. Int J Behav Nutr Phys Act. 2019;16(1):67.

Tricco AC, Antony J, Zarin W, et al. A scoping review of rapid review methods. BMC Med. 2015;13(1):224.

Moher D, Stewart L, Shekelle P. All in the family: systematic reviews, rapid reviews, scoping reviews, realist reviews, and more. Syst Rev. 2015;4(1):183.

Tricco AC, Zarin W, Ghassemi M, et al. Same family, different species: methodological conduct and quality varies according to purpose for five types of knowledge synthesis. J Clin Epidemiol. 2018;96:133–42.

Barker M, Adelson P, Peters MDJ, Steen M. Probiotics and human lactational mastitis: a scoping review. Women Birth. 2020;33(6):e483–e491.

O’Donnell N, Kappen DL, Fitz-Walter Z, Deterding S, Nacke LE, Johnson D. How multidisciplinary is gamification research? Results from a scoping review. Extended abstracts publication of the annual symposium on computer-human interaction in play. Amsterdam: Association for Computing Machinery; 2017. p. 445–52.

O’Flaherty J, Phillips C. The use of flipped classrooms in higher education: a scoping review. Internet High Educ. 2015;25:85–95.

Di Pasquale V, Miranda S, Neumann WP. Ageing and human-system errors in manufacturing: a scoping review. Int J Prod Res. 2020;58(15):4716–40.

Knowledge Synthesis Team. What review is right for you? 2019. https://whatreviewisrightforyou.knowledgetranslation.net/

Lv M, Luo X, Estill J, et al. Coronavirus disease (COVID-19): a scoping review. Euro Surveill. 2020;25(15):2000125.

Shemilt I, Simon A, Hollands GJ, et al. Pinpointing needles in giant haystacks: use of text mining to reduce impractical screening workload in extremely large scoping reviews. Res Synth Methods. 2014;5(1):31–49.

Khalil H, Bennett M, Godfrey C, McInerney P, Munn Z, Peters M. Evaluation of the JBI scoping reviews methodology by current users. Int J Evid-based Healthc. 2020;18(1):95–100.

Kennedy K, Adelson P, Fleet J, et al. Shared decision aids in pregnancy care: a scoping review. Midwifery. 2020;81:102589.

Dall’Ora C, Ball J, Recio-Saucedo A, Griffiths P. Characteristics of shift work and their impact on employee performance and wellbeing: a literature review. Int J Nurs Stud. 2016;57:12–27.

Feo R, Conroy T, Wiechula R, Rasmussen P, Kitson A. Instruments measuring behavioural aspects of the nurse–patient relationship: a scoping review. J Clin Nurs. 2020;29(11-12):1808–21.

Rycroft-Malone J, McCormack B, Hutchinson AM, et al. Realist synthesis: illustrating the method for implementation research. Implement Sci. 2012;7(1):33.

Colquhoun HL, Levac D, O’Brien KK, et al. Scoping reviews: time for clarity in definition, methods, and reporting. J Clin Epidemiol. 2014;67(12):1291–4.

Tricco AC, Zarin W, Rios P, et al. Engaging policy-makers, health system managers, and policy analysts in the knowledge synthesis process: a scoping review. Implement Sci. 2018;13(1):31.

Cooper S, Cant R, Kelly M, et al. An evidence-based checklist for improving scoping review quality. Clin Nurs Res. 2021;30(3):230–240.

Pollock A, Campbell P, Struthers C, et al. Stakeholder involvement in systematic reviews: a scoping review. Syst Rev. 2018;7(1):208.

Tricco AC, Zarin W, Rios P, Pham B, Straus SE, Langlois EV. Barriers, facilitators, strategies and outcomes to engaging policymakers, healthcare managers and policy analysts in knowledge synthesis: a scoping review protocol. BMJ Open. 2016;6(12):e013929.

Denton M, Borrego M. Funds of knowledge in STEM education: a scoping review. Stud Eng Educ. 2021;1(2):71–92.

Masta S, Secules S. When critical ethnography leaves the field and enters the engineering classroom: a scoping review. Stud Eng Educ. 2021;2(1):35–52.

Li Y, Marier-Bienvenue T, Perron-Brault A, Wang X, Pare G. Blockchain technology in business organizations: a scoping review. In: Proceedings of the 51st Hawaii International Conference on System Sciences; 2018. https://core.ac.uk/download/143481400.pdf

Houlihan M, Click A, Wiley C. Twenty years of business information literacy research: a scoping review. Evid Based Libr Inf Pract. 2020;15(4):124–63.

Plug I, Stommel W, Lucassen P, Hartman T, Van Dulmen S, Das E. Do women and men use language differently in spoken face-to-face interaction? A scoping review. Rev Commun Res. 2021;9:43–79.

McGowan J, Straus S, Moher D, et al. Reporting scoping reviews - PRISMA ScR extension. J Clin Epidemiol. 2020;123:177–9.


Acknowledgements

The authors would like to acknowledge the other members of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) working group as well as Shazia Siddiqui, a research assistant in the Knowledge Synthesis Team in the Knowledge Translation Program, Li Ka Shing Knowledge Institute, St. Michael's Hospital, Unity Health Toronto.

Funding

The authors declare that no specific funding was received for this work. Author ACT declares that she is funded by a Tier 2 Canada Research Chair in Knowledge Synthesis. KKO is supported by a Canada Research Chair in Episodic Disability and Rehabilitation with the Canada Research Chairs Program.

Author information

Authors and affiliations

University of South Australia, UniSA Clinical and Health Sciences, Rosemary Bryant AO Research Centre, Playford Building P4-27, City East Campus, North Terrace, Adelaide, 5000, South Australia

Micah D. J. Peters & Casey Marnie

Adelaide Nursing School, Faculty of Health and Medical Sciences, The University of Adelaide, 101 Currie St, Adelaide, 5001, South Australia

Micah D. J. Peters

The Centre for Evidence-based Practice South Australia (CEPSA): a Joanna Briggs Institute Centre of Excellence, Faculty of Health and Medical Sciences, The University of Adelaide, 5006, Adelaide, South Australia

Department of Occupational Science and Occupational Therapy, University of Toronto, Terrence Donnelly Health Sciences Complex, 3359 Mississauga Rd, Toronto, Ontario, L5L 1C6, Canada

Heather Colquhoun

Rehabilitation Sciences Institute (RSI), University of Toronto, St. George Campus, 160-500 University Avenue, Toronto, Ontario, M5G 1V7, Canada

Heather Colquhoun & Kelly K. O’Brien

Knowledge Synthesis Group, Ottawa Hospital Research Institute, 1053 Carling Avenue, Ottawa, Ontario, K1Y 4E9, Canada

Chantelle M. Garritty

Southern California Evidence Review Center, University of Southern California, Los Angeles, CA, 90007, USA

Susanne Hempel

Royal College of Physicians and Surgeons of Canada, 774 Echo Drive, Ottawa, Ontario, K1S 5N8, Canada

Tanya Horsley

Partnership for Maternal, Newborn and Child Health (PMNCH), World Health Organisation, Avenue Appia 20, 1211, Geneva, Switzerland

Etienne V. Langlois

Sunnybrook Research Institute, 2075 Bayview Ave, Toronto, Ontario, M4N 3M5, Canada

Erin Lillie

Department of Physical Therapy, University of Toronto, St. George Campus, 160-500 University Avenue, Toronto, Ontario, M5G 1V7, Canada

Kelly K. O’Brien

Institute of Health Policy, Management and Evaluation (IHPME), University of Toronto, St. George Campus, 155 College Street 4th Floor, Toronto, Ontario, M5T 3M6, Canada

UNDP/UNFPA/UNICEF/WHO/World Bank Special Programme of Research, Development and Research Training in Human Reproduction (HRP), Department of Sexual and Reproductive Health and Research, World Health Organisation, Avenue Appia 20, 1211, Geneva, Switzerland

Ӧzge Tunçalp

McMaster Health Forum, McMaster University, 1280 Main Street West, Hamilton, Ontario, L8S 4L8, Canada

Michael G. Wilson

Department of Health Evidence and Impact, McMaster University, 1280 Main Street West, Hamilton, Ontario, L8S 4L8, Canada

Centre for Health Economics and Policy Analysis, McMaster University, 1280 Main Street West, Hamilton, Ontario, L8S 4L8, Canada

Knowledge Translation Program, Li Ka Shing Knowledge Institute, St. Michael’s Hospital, Unity Health Toronto, 209 Victoria Street, East Building, Toronto, Ontario, M5B 1T8, Canada

Wasifa Zarin & Andrea C. Tricco

Epidemiology Division and Institute for Health Policy, Management, and Evaluation, Dalla Lana School of Public Health, University of Toronto, 155 College St, Room 500, Toronto, Ontario, M5T 3M7, Canada

Andrea C. Tricco

Queen’s Collaboration for Health Care Quality Joanna Briggs Institute Centre of Excellence, School of Nursing, Queen’s University, 99 University Ave, Kingston, Ontario, K7L 3N6, Canada


Contributions

MDJP, CM, HC, CMG, SH, TH, EVL, EL, KKO, OT, MGW, WZ and AT all made substantial contributions to the conception, design and drafting of the work. MDJP and CM prepared the final version of the manuscript. All authors reviewed and approved the final version of the manuscript.

Corresponding author

Correspondence to Andrea C. Tricco.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Competing interests

Author ACT is an Associate Editor for the journal. All other authors declare no conflicts of interest.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article

Peters, M.D.J., Marnie, C., Colquhoun, H. et al. Scoping reviews: reinforcing and advancing the methodology and application. Syst Rev 10, 263 (2021). https://doi.org/10.1186/s13643-021-01821-3


Received: 29 January 2021

Accepted: 27 September 2021

Published: 08 October 2021

DOI: https://doi.org/10.1186/s13643-021-01821-3


Keywords

  • Scoping reviews
  • Evidence synthesis
  • Research methodology
  • Reporting guidelines
  • Methodological guidance

Systematic Reviews

ISSN: 2046-4053



PSY290 - Research Methods


Writing a Critical Review


A critical review is an academic appraisal of an article that offers both a summary and critical comment. Critical reviews are useful for evaluating the relevance of a source to your academic needs, and they demonstrate that you have understood the text and can analyze its main arguments or findings. A critical review is not just a summary; it is an evaluation of what the author has said on a topic. It is “critical” in that you thoughtfully consider the validity and accuracy of the author’s claims and identify other valid points of view.

An effective critical review has three parts:

  • An APA citation of the article.
  • A clear summary of the purpose of the article and of the strengths and weaknesses of the research (in your own words; no quotations).
  • An evaluation of the article’s contribution to the discipline or broad subject area and of how it relates to your own research.

Steps to Write a Critical Review:

  A. Create an APA-style citation for the article you are reviewing.
  B. Skim the text: read the title, abstract, introduction, and conclusion.
  C. Read the entire article in order to identify its main ideas and purpose.

Q. What were the authors investigating? What is their thesis?

Q. What did the authors hope to discover?

  D. Pay close attention to the methods used by the authors to collect information.

Q. What are the characteristics of the participants (e.g., age, gender, ethnicity)?

Q. What was the procedure or experimental method/surveys used?

Q. Are there any flaws in the design of their study?

  E. Review the main findings in the “Discussion” or “Conclusion” section. This will help you to evaluate the validity of their evidence and the credibility of the authors.

Q. Are their conclusions convincing?

Q. Were their results significant? If so, describe how they were significant.

  F. Evaluate the usefulness of the text to YOU in the context of your own research.

Q. How does this article assist you in your research?

Q. How does it enhance your understanding of this issue?

Q. What gaps in your research does it fill?

Good Summary:

Hock, S., & Rochford, R. A. (2010). A letter-writing campaign: Linking academic success and civic engagement. Journal of Community Engagement and Scholarship, 3(2), 76-82.

Hock & Rochford (2010) describe how two classes of developmental writing students were engaged in a service-learning project to support the preservation of an on-campus historical site. The goal of the assignment was to help students to see how they have influence in their community by acting as engaged citizens, and to improve their scores on the ACT Writing Sample Assessment (WSA) exam. The authors report that students in developmental classes often feel disempowered, especially when English is not their first language. This assignment not only assisted them in elevating their written communication skills, but it also gave real-life significance to the assignment, and by extension made them feel like empowered members of the community. The advancement in student scores serves as evidence to support my research that when students are given assignments which permit local advocacy and active participation, their academic performance also improves.

Bad Summary:

Two ELL classes complete a service-learning project and improve their writing scores. This article was good because it provided me with lots of information I can use. The students learned a lot in their service-learning project and they passed the ACT exam.  

Remember you're describing what someone else has said. Use verbal cues to make this clear to your reader.  Here are some suggested verbs to use: 

* Adapted from: http://www.laspositascollege.edu/raw/summaries.php

  • Last Updated: Apr 18, 2024 5:43 PM
  • URL: https://paradisevalley.libguides.com/PSY290


Write a Critical Review


Purpose:

  • To introduce the source, its main ideas, key details, and its place within the field
  • To present your assessment of the quality of the source

In general, the introduction of your critical review should include

  • Author(s) name
  • Title of the source 
  • What is the author's central purpose?
  • What methods or theoretical frameworks were used to accomplish this purpose?
  • What topic areas, chapters, sections, or key points did the author use to structure the source?
  • What were the results or findings of the study?
  • How were the results or findings interpreted? How were they related to the original problem (author's view of evidence rather than objective findings)?
  • Who conducted the research? What were/are their interests?
  • Why did they do this research?
  • Was this research pertinent only within the author’s field, or did it have broader (even global) relevance?
  • On what prior research was this source based? What gap is the author attempting to address?
  • How important was the research question posed by the researcher?
  • Your overall opinion of the quality of the source. Think of this like a thesis or main argument.
  • Present your evaluation of the source, providing evidence from the text (or other sources) to support your assessment.

In general, the body of your critical review should include

  • Is the material organized logically and with appropriate headings?
  • Are there stylistic problems in logic, clarity, or language?
  • Were the author(s) able to answer the question (test the hypothesis) raised?
  • What was the objective of the study?
  • Does all the information lead coherently to the purpose of the study?
  • Are the methods valid for studying the problem or gap?
  • Could the study be duplicated from the information provided?
  • Is the experimental design logical and reliable?
  • How are the data organized? Are they logical and interpretable?
  • Do the results reveal what the researcher intended?
  • Do the authors present a logical interpretation of the results?
  • Have the limitations of the research been addressed?
  • Does the study consider other key studies in the field or other research possibilities or directions?
  • How was the significance of the work described?
  • Follow the structure of the journal article (e.g. Introduction, Methods, Results, Discussion) - highlighting the strengths and weaknesses in each section
  • Present the weaknesses of the article, and then the strengths of the article (or vice versa).
  • Group your ideas according to different research themes presented in the source
  • Group the strengths and weaknesses of the article into the following areas: originality, reliability, validity, relevance, and presentation

Purpose: 

  • To summarize the strengths and weaknesses of the article as a whole
  • To assert the article’s practical and theoretical significance

In general, the conclusion of your critical review should include

  • A restatement of your overall opinion
  • A summary of the key strengths and weaknesses of the research that support your overall opinion of the source
  • Did the research reported in this source result in the formation of new questions, theories or hypotheses by the authors or other researchers?
  • Have other researchers subsequently supported or refuted the observations or interpretations of these authors?
  • Did the research provide new factual information, a new understanding of a phenomenon in the field, a new research technique?
  • Did the research produce any practical applications? 
  • What are the social, political, technological, or medical implications of this research?
  • How do you evaluate the significance of the research? 
  • Find out what style guide you are required to follow (e.g., APA, MLA, Chicago) and follow the guidelines to create a reference list (may be called a bibliography or works cited).
  • Be sure to include citations in the text when you refer to the source itself or external sources. 
  • Check out our Cite Your Sources Guide for more information. 
  • Read assignment instructions carefully and refer to them throughout the writing process.
  • Make an outline of your main sections before you write.
  • If your professor does not assign a topic or source, you must choose one yourself. Select a source that interests you and is written clearly so you can understand it.
  • Last Updated: Sep 26, 2023 10:58 AM
  • URL: https://guides.lib.uoguelph.ca/CriticalReview


This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.


Review Typologies

There are many types of evidence synthesis projects, of which systematic reviews are only one. The selection of review type depends wholly on the research question; not all research questions are well suited to a systematic review.

  • Review Typologies (from LITR-EX): This site explores different review methodologies, such as systematic, scoping, realist, narrative, state-of-the-art, meta-ethnography, critical, and integrative reviews. The LITR-EX site has a health professions education focus, but the advice and information are widely applicable.

Review the table to peruse review types and associated methodologies. Librarians can also help your team determine which review type might be appropriate for your project. 

Reproduced from Grant, M. J. and Booth, A. (2009), A typology of reviews: an analysis of 14 review types and associated methodologies. Health Information & Libraries Journal, 26: 91-108.  doi:10.1111/j.1471-1842.2009.00848.x

  • Last Updated: Mar 20, 2024 2:21 PM
  • URL: https://guides.mclibrary.duke.edu/sysreview

Jetting Phenomenon in Cold Spray: A Critical Review on Finite Element Simulations

  • Published: 15 April 2024

S. Rahmati, J. Mostaghimi, T. Coyle & A. Dolatabadi

This paper offers a concise critical review of finite element studies of the jetting phenomenon in cold spray (CS). CS is a deposition technique wherein solid particles impact a substrate at high velocities, inducing severe plastic deformation and material deposition. These high-velocity impacts lead to the ejection of material in a jet-like shape at the periphery of the particle/substrate interface, a phenomenon known as “jetting”. Jetting has been the subject of numerous studies over recent decades and remains a point of debate. Two main mechanisms, adiabatic shear instability (ASI) and hydrodynamic pressure-release (HPR), have been proposed to explain it. These mechanisms are mainly elucidated through finite element method (FEM) simulations, a numerical technique rooted in continuum mechanics. However, it is important to emphasize that FEM is limited by the equations established for the analysis, so its predictive capabilities are confined to the principles defined within those equations; the choice of equations and approaches significantly influences the outcomes and predictions. While recognizing FEM’s capabilities, this study reviews the ASI and HPR mechanisms within the context of CS. Additionally, it reviews FEM’s algorithms and the core principles that govern how FEM calculates the plastic deformation that can lead to the formation of jetting.
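As an illustration of how the constitutive choice bounds what FEM can predict, many cold spray impact simulations use the Johnson–Cook flow stress model (named here as a widely used example, not as a claim about which models this particular review compares), whose multiplicative thermal-softening term is what allows a shear instability to emerge numerically:

```latex
% Johnson–Cook flow stress:
% (strain hardening) x (strain-rate hardening) x (thermal softening)
\sigma_y \;=\; \left(A + B\,\varepsilon_p^{\,n}\right)
\left(1 + C \ln \frac{\dot{\varepsilon}_p}{\dot{\varepsilon}_0}\right)
\left(1 - {T^{*}}^{\,m}\right),
\qquad
T^{*} \;=\; \frac{T - T_r}{T_m - T_r}
```

where \(A, B, C, n, m\) are material constants, \(\varepsilon_p\) is the equivalent plastic strain, \(\dot{\varepsilon}_0\) a reference strain rate, and \(T_r, T_m\) the reference and melting temperatures. When local heating makes the softening factor fall faster than the hardening factors rise, the predicted flow stress collapses, which is the signature that ASI-based FEM studies interpret as the onset of shear instability and jetting; a simulation built on a constitutive law without such a softening term cannot predict ASI at all, which is the sense in which FEM results are confined to their governing equations.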


H. Assadi, H. Kreye, F. Gärtner, and T. Klassen, Cold Spraying—A Materials Perspective, Acta Mater. , 2016, 116 , p 382-407. https://doi.org/10.1016/J.ACTAMAT.2016.06.034

Article   CAS   Google Scholar  

K. Kim, M. Watanabe, and S. Kuroda, Jetting-Out Phenomenon Associated with Bonding of Warm-Sprayed Titanium Particles onto Steel Substrate, J. Therm. Spray Technol. , 2009, 18 , p 490. https://doi.org/10.1007/s11666-009-9379-1

A.A. Tiamiyu, Y. Sun, K.A. Nelson, and C.A. Schuh, Site-Specific Study of Jetting, Bonding, and Local Deformation During High-Velocity Metallic Microparticle Impact, Acta Mater. , 2021, 202 , p 159-169. https://doi.org/10.1016/j.actamat.2020.10.057

M. Razavipour, S. Rahmati, A. Zúñiga, D. Criado, and B. Jodoin, Bonding Mechanisms in Cold Spray: Influence of Surface Oxidation During Powder Storage, J. Therm. Spray Technol., 2020. https://doi.org/10.1007/s11666-020-01123-5

R. Nikbakht, M. Saadati, T.-S. Kim, M. Jahazi, H.S. Kim, and B. Jodoin, Cold Spray Deposition Characteristic and Bonding of CrMnCoFeNi High Entropy Alloy, Surf. Coat. Technol., 2021, 425, p 127748. https://doi.org/10.1016/j.surfcoat.2021.127748

H. Assadi, F. Gärtner, T. Stoltenhoff, and H. Kreye, Bonding Mechanism in Cold Gas Spraying, Acta Mater., 2003, 51, p 4379-4394. https://doi.org/10.1016/S1359-6454(03)00274-X

M. Hassani-Gangaraj, D. Veysset, V.K. Champagne, K.A. Nelson, and C.A. Schuh, Adiabatic Shear Instability is Not Necessary for Adhesion in Cold Spray, Acta Mater., 2018, 158, p 430-439. https://doi.org/10.1016/J.ACTAMAT.2018.07.065

H. Assadi, F. Gärtner, T. Klassen, and H. Kreye, Comment on "Adiabatic Shear Instability is Not Necessary for Adhesion in Cold Spray," Scr. Mater., 2019, 162, p 512-514. https://doi.org/10.1016/j.scriptamat.2018.10.036

M. Hassani-Gangaraj, D. Veysset, V.K. Champagne, K.A. Nelson, and C.A. Schuh, Response to Comment on "Adiabatic Shear Instability is Not Necessary for Adhesion in Cold Spray," Scr. Mater., 2019, 162, p 515-519. https://doi.org/10.1016/j.scriptamat.2018.12.015

S. Rahmati and A. Ghaei, The Use of Particle/Substrate Material Models in Simulation of Cold-Gas Dynamic-Spray Process, J. Therm. Spray Technol., 2014, 23, p 530-540. https://doi.org/10.1007/s11666-013-0051-4

G.R. Johnson and W.H. Cook, A Constitutive Model and Data for Metals Subjected to Large Strains, High Strain Rates and High Temperatures, Proceedings of the 7th International Symposium on Ballistics, The Hague, Netherlands, 1983, p 541-547

C.Y. Gao and L.C. Zhang, A Constitutive Model for Dynamic Plasticity of FCC Metals, Mater. Sci. Eng. A, 2010, 527, p 3138-3143. https://doi.org/10.1016/j.msea.2010.01.083

P. Landau, S. Osovski, A. Venkert, V. Gärtnerová, and D. Rittel, The Genesis of Adiabatic Shear Bands, Sci. Rep., 2016, 6, p 37226. https://doi.org/10.1038/srep37226

Abaqus 6.14 Online Documentation, Theory Manual, Dassault Systèmes Simulia Inc., Johnston, 2016.

M. Grujicic, C.L. Zhao, W.S. DeRosset, and D. Helfritch, Adiabatic Shear Instability Based Mechanism for Particles/Substrate Bonding in the Cold-Gas Dynamic-Spray Process, Mater. Des., 2004, 25, p 681-688. https://doi.org/10.1016/j.matdes.2004.03.008

G. Bae, Y. Xiong, S. Kumar, K. Kang, and C. Lee, General Aspects of Interface Bonding in Kinetic Sprayed Coatings, Acta Mater., 2008, 56, p 4858-4868. https://doi.org/10.1016/J.ACTAMAT.2008.06.003

C.J. Akisin, C.J. Bennett, F. Venturi, H. Assadi, and T. Hussain, Numerical and Experimental Analysis of the Deformation Behavior of CoCrFeNiMn High Entropy Alloy Particles onto Various Substrates During Cold Spraying, J. Therm. Spray Technol., 2022, 31, p 1085-1111. https://doi.org/10.1007/s11666-022-01377-1

Q. Chen, W. Xie, V.K. Champagne, A. Nardi, J.-H. Lee, and S. Müftü, On Adiabatic Shear Instability in Impacts of Micron-Scale Al-6061 Particles with Sapphire and Al-6061 Substrates, Int. J. Plast., 2023, 166, p 103630. https://doi.org/10.1016/j.ijplas.2023.103630

L. Palodhi and H. Singh, On the Dependence of Critical Velocity on the Material Properties During Cold Spray Process, J. Therm. Spray Technol., 2020, 29, p 1863-1875. https://doi.org/10.1007/s11666-020-01105-7

F. Meng, S. Yue, and J. Song, Quantitative Prediction of Critical Velocity and Deposition Efficiency in Cold-Spray: A Finite-Element Study, Scr. Mater., 2015, 107, p 83-87. https://doi.org/10.1016/j.scriptamat.2015.05.026

F. Meng, H. Aydin, S. Yue, and J. Song, The Effects of Contact Conditions on the Onset of Shear Instability in Cold-Spray, J. Therm. Spray Technol., 2015, 24, p 711-719. https://doi.org/10.1007/s11666-015-0229-z

C.-J. Li, W.-Y. Li, and H. Liao, Examination of the Critical Velocity for Deposition of Particles in Cold Spraying, J. Therm. Spray Technol., 2006, 15, p 212-222. https://doi.org/10.1361/105996306X108093

W.-Y. Li and W. Gao, Some Aspects on 3D Numerical Modeling of High Velocity Impact of Particles in Cold Spraying by Explicit Finite Element Analysis, Appl. Surf. Sci., 2009, 255, p 7878-7892. https://doi.org/10.1016/J.APSUSC.2009.04.135

W.-Y. Li, H. Liao, C.-J. Li, G. Li, C. Coddet, and X. Wang, On High Velocity Impact of Micro-Sized Metallic Particles in Cold Spraying, Appl. Surf. Sci., 2006, 253, p 2852. https://doi.org/10.1016/j.apsusc.2006.05.126

M.A. Adaan-Nyiak and A.A. Tiamiyu, Recent Advances on Bonding Mechanism in Cold Spray Process: A Review of Single-Particle Impact Methods, J. Mater. Res., 2023, 38, p 69-95. https://doi.org/10.1557/s43578-022-00764-2

W.-Y. Li, S. Yin, and X.-F. Wang, Numerical Investigations of the Effect of Oblique Impact on Particle Deformation in Cold Spraying by the SPH Method, Appl. Surf. Sci., 2010, 256, p 3725-3734. https://doi.org/10.1016/j.apsusc.2010.01.014

M. Yu, W.-Y. Li, F.F. Wang, and H.L. Liao, Finite Element Simulation of Impacting Behavior of Particles in Cold Spraying by Eulerian Approach, J. Therm. Spray Technol., 2012, 21, p 745-752. https://doi.org/10.1007/s11666-011-9717-y

B. Yildirim, S. Muftu, and A. Gouldstone, Modeling of High Velocity Impact of Spherical Particles, Wear, 2011, 270, p 703-713. https://doi.org/10.1016/j.wear.2011.02.003

S. Lepi, Practical Guide to Finite Elements: A Solid Mechanics Approach, Taylor & Francis, Oxford, 1998.

R. Hedayati and M. Sadighi, Bird Strike: An Experimental, Theoretical and Numerical Investigation, Elsevier, Amsterdam, 2015.

K.H. Huebner, D.L. Dewhirst, D.E. Smith, and T.G. Byrom, The Finite Element Method for Engineers, Wiley, New York, 2001.

P. Wriggers, Nonlinear Finite Element Methods, Springer, Berlin Heidelberg, 2008.

L.M. Pereira, A. Zúñiga, B. Jodoin, R.G.A. Veiga, and S. Rahmati, Unraveling Jetting Mechanisms in High-Velocity Impact of Copper Particles Using Molecular Dynamics Simulations, Addit. Manuf., 2023, 75, p 103755. https://doi.org/10.1016/j.addma.2023.103755

F. Dunne and N. Petrinic, Introduction to Computational Plasticity, OUP Oxford, Oxford, 2005.

E.A. de Souza Neto, D. Peric, and D.R.J. Owen, Computational Methods for Plasticity: Theory and Applications, Wiley, New York, 2011.

A. Khoei, Computational Plasticity in Powder Forming Processes, Elsevier, Amsterdam, 2010.

Q.H. Shah and H. Abid, LS-DYNA for Beginners: An Insight Into Ls-Prepost and Ls-Dyna, LAP Lambert Academic Publishing, Saarbrucken, 2012.

L.M. Kachanov, Fundamentals of the Theory of Plasticity, Dover Publications, Mineola, 2013.

M. Okereke and S. Keates, Finite Element Applications: A Practical Guide to the FEM Process, Springer, Cham, 2018.

C.Y. Gao, FE Realization of a Thermo-Visco-Plastic Constitutive Model Using VUMAT in ABAQUS/Explicit Program, Computational Mechanics: Proceedings of the International Symposium on Computational Mechanics, Springer, Berlin, Heidelberg, 2009, p 301

L. Ming and O. Pantalé, An Efficient and Robust VUMAT Implementation of Elastoplastic Constitutive Laws in Abaqus/Explicit Finite Element Code, Mech. Ind., 2018, 19, p 308. https://doi.org/10.1051/meca/2018021

G.N. Devi, S. Kumar, T.B. Mangalarapu, G. Vinay, N.M. Chavan, and A.V. Gopal, Assessing Critical Process Condition for Bonding in Cold Spraying, Surf. Coat. Technol., 2023, 470, p 129839. https://doi.org/10.1016/j.surfcoat.2023.129839

Z. Dai, F. Xu, J. Wang, and L. Wang, Investigation of Dynamic Contact Between Cold Spray Particles and Substrate Based on 2D SPH Method, Int. J. Solids Struct., 2023, 284, p 112520. https://doi.org/10.1016/j.ijsolstr.2023.112520

S. Rahmati, A. Zúñiga, B. Jodoin, and R.G.A. Veiga, Deformation of Copper Particles Upon Impact: A Molecular Dynamics Study of Cold Spray, Comput. Mater. Sci., 2020, 171, p 109219. https://doi.org/10.1016/j.commatsci.2019.109219

N. Deng, D. Qu, K. Zhang, G. Liu, S. Li, and Z. Zhou, Simulation and Experimental Study on Cold Sprayed WCu Composite with High Retainability of W Using Core-Shell Powder, Surf. Coat. Technol., 2023, 466, p 129639. https://doi.org/10.1016/j.surfcoat.2023.129639

P. Khamsepour, C. Moreau, and A. Dolatabadi, Effect of Particle and Substrate Pre-heating on the Oxide Layer and Material Jet Formation in Solid-State Spray Deposition: A Numerical Study, J. Therm. Spray Technol., 2023, 32, p 1153-1166. https://doi.org/10.1007/s11666-022-01509-7

S. Rahmati and B. Jodoin, Physically Based Finite Element Modeling Method to Predict Metallic Bonding in Cold Spray, J. Therm. Spray Technol., 2020, 29, p 611-629. https://doi.org/10.1007/s11666-020-01000-1

S. Rahmati, R.G.A. Veiga, A. Zúñiga, and B. Jodoin, A Numerical Approach to Study the Oxide Layer Effect on Adhesion in Cold Spray, J. Therm. Spray Technol., 2021. https://doi.org/10.1007/s11666-021-01245-4

W.Y. Li, C. Zhang, C.-J. Li, and H. Liao, Modeling Aspects of High Velocity Impact of Particles in Cold Spraying by Explicit Finite Element Analysis, J. Therm. Spray Technol., 2009, 18, p 921-933.

Author information

Authors and Affiliations

Centre for Advanced Coating Technologies, University of Toronto, Toronto, ON, Canada

S. Rahmati, J. Mostaghimi, T. Coyle & A. Dolatabadi


Corresponding author

Correspondence to S. Rahmati.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix A: Cuboid Model

In ABAQUS/Explicit (Ref 14), a two-dimensional Lagrangian model with four-node plane-strain elements was used to simulate the impact of a 50 µm copper cuboid onto a rigid wall (modeled as an analytical rigid surface (Ref 14)). A mesh size of 1 µm was chosen to accurately resolve the extensive deformation induced during high-velocity impact; several element sizes were evaluated, and 1 µm was selected on the basis of these tests. A schematic representation of the simulation setup is shown in Fig. 16. The impact velocity was set to 500 m/s, and the initial temperature was set to room temperature (300 K). Outputs were saved at each increment to capture the progressive deformation behavior. A surface-to-surface contact formulation was employed: the underside of the cuboid that collided with the substrate was defined as the slave (second) surface, while the rigid wall was selected as the master (first) surface (Ref 14). The normal contact behavior was set to hard contact with the default settings (Ref 14). Additionally, the material behavior was assumed to be linear elastic in this simulation. The material properties used for this simulation are outlined in Table 1.
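As a rough sanity check on these settings, the characteristic time and strain-rate scales of the impact can be estimated from the particle size and velocity given above. The v/D strain-rate estimate below is a common order-of-magnitude approximation in the cold spray literature, not a quantity computed in this appendix:

```python
# Order-of-magnitude estimates for the 50 µm copper cuboid impacting at 500 m/s.
D = 50e-6   # characteristic particle (cuboid) size, m
v = 500.0   # impact velocity, m/s

t_contact = D / v     # characteristic contact/deformation time, s (~100 ns)
strain_rate = v / D   # characteristic strain rate, 1/s (~1e7)

print(f"contact time ~ {t_contact:.1e} s")
print(f"strain rate  ~ {strain_rate:.1e} 1/s")
```

These scales (deformation completed within ~100 ns at strain rates near 10^7 s^-1) are the reason explicit time integration with a fine mesh is used for such impact simulations.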

Fig. 16: Schematic representation of the simulation setup used in this study

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article

Rahmati, S., Mostaghimi, J., Coyle, T. et al. Jetting Phenomenon in Cold Spray: A Critical Review on Finite Element Simulations. J Therm Spray Tech (2024). https://doi.org/10.1007/s11666-024-01766-8


Received: 14 December 2023

Revised: 28 February 2024

Accepted: 20 March 2024

Published: 15 April 2024

DOI: https://doi.org/10.1007/s11666-024-01766-8


Keywords: finite element method
Transformations That Work

  • Michael Mankins
  • Patrick Litre


More than a third of large organizations have some type of transformation program underway at any given time, and many launch one major change initiative after another. Though they kick off with a lot of fanfare, most of these efforts fail to deliver. Only 12% produce lasting results, and that figure hasn’t budged in the past two decades, despite everything we’ve learned over the years about how to lead change.

Clearly, businesses need a new model for transformation. In this article the authors present one based on research with dozens of leading companies that have defied the odds, such as Ford, Dell, Amgen, T-Mobile, Adobe, and Virgin Australia. The successful programs, the authors found, employed six critical practices: treating transformation as a continuous process; building it into the company’s operating rhythm; explicitly managing organizational energy; using aspirations, not benchmarks, to set goals; driving change from the middle of the organization out; and tapping significant external capital to fund the effort from the start.

Lessons from companies that are defying the odds

Idea in Brief

The Problem

Although companies frequently engage in transformation initiatives, few are actually transformative. Research indicates that only 12% of major change programs produce lasting results.

Why It Happens

Leaders are increasingly content with incremental improvements. As a result, they experience fewer outright failures but also fewer real transformations.

The Solution

To deliver, change programs must treat transformation as a continuous process, build it into the company’s operating rhythm, explicitly manage organizational energy, state aspirations rather than set targets, drive change from the middle out, and be funded by serious capital investments.

Nearly every major corporation has embarked on some sort of transformation in recent years. By our estimates, at any given time more than a third of large organizations have a transformation program underway. When asked, roughly 50% of CEOs we’ve interviewed report that their company has undertaken two or more major change efforts within the past five years, with nearly 20% reporting three or more.

  • Michael Mankins is a leader in Bain’s Organization and Strategy practices and is a partner based in Austin, Texas. He is a coauthor of Time, Talent, Energy: Overcome Organizational Drag and Unleash Your Team’s Productive Power (Harvard Business Review Press, 2017).
  • Patrick Litre leads Bain’s Global Transformation and Change practice and is a partner based in Atlanta.



Published on 22.4.2024 in Vol 26 (2024)

Patient and Staff Experience of Remote Patient Monitoring—What to Measure and How: Systematic Review

Authors of this article:


  • Valeria Pannunzio 1, PhD;
  • Hosana Cristina Morales Ornelas 2, MSc;
  • Pema Gurung 3, MSc;
  • Robert van Kooten 4, MD, PhD;
  • Dirk Snelders 1, PhD;
  • Hendrikus van Os 5, MD, PhD;
  • Michel Wouters 6, MD, PhD;
  • Rob Tollenaar 4, MD, PhD;
  • Douwe Atsma 7, MD, PhD;
  • Maaike Kleinsmann 1, PhD

1 Department of Design, Organisation and Strategy, Faculty of Industrial Design Engineering, Delft University of Technology, Delft, Netherlands

2 Department of Sustainable Design Engineering, Faculty of Industrial Design Engineering, Delft University of Technology, Delft, Netherlands

3 Walaeus Library, Leiden University Medical Center, Leiden, Netherlands

4 Department of Surgery, Leiden University Medical Center, Leiden, Netherlands

5 National eHealth Living Lab, Department of Public Health & Primary Care, Leiden University Medical Center, Leiden, Netherlands

6 Department of Surgery, Netherlands Cancer Institute – Antoni van Leeuwenhoek, Amsterdam, Netherlands

7 Department of Cardiology, Leiden University Medical Center, Leiden, Netherlands

Corresponding Author:

Valeria Pannunzio, PhD

Department of Design, Organisation and Strategy

Faculty of Industrial Design Engineering

Delft University of Technology

Landbergstraat 15

Delft, 2628 CE

Netherlands

Phone: 31 15 27 81460

Email: [email protected]

Background: Patient and staff experience is a vital factor to consider in the evaluation of remote patient monitoring (RPM) interventions. However, no comprehensive overview of available RPM patient and staff experience–measuring methods and tools exists.

Objective: This review aimed at obtaining a comprehensive set of experience constructs and corresponding measuring instruments used in contemporary RPM research and at proposing an initial set of guidelines for improving methodological standardization in this domain.

Methods: Full-text papers reporting on instances of patient or staff experience measuring in RPM interventions, written in English, and published after January 1, 2011, were considered for eligibility. By “RPM interventions,” we referred to interventions including sensor-based patient monitoring used for clinical decision-making; papers reporting on other kinds of interventions were therefore excluded. Papers describing primary care interventions, involving participants under 18 years of age, or focusing on attitudes or technologies rather than specific interventions were also excluded. We searched 2 electronic databases, Medline (PubMed) and EMBASE, on February 12, 2021. We explored and structured the obtained corpus of data through correspondence analysis, a multivariate statistical technique.

Results: In total, 158 papers were included, covering RPM interventions in a variety of domains. From these studies, we reported 546 experience-measuring instances in RPM, covering the use of 160 unique experience-measuring instruments to measure 120 unique experience constructs. We found that the research landscape has seen a sizeable growth in the past decade, that it is affected by a relative lack of focus on the experience of staff, and that the overall corpus of collected experience measures can be organized in 4 main categories (service system related, care related, usage and adherence related, and health outcome related). In the light of the collected findings, we provided a set of 6 actionable recommendations to RPM patient and staff experience evaluators, in terms of both what to measure and how to measure it. Overall, we suggested that RPM researchers and practitioners include experience measuring as part of integrated, interdisciplinary data strategies for continuous RPM evaluation.

Conclusions: At present, there is a lack of consensus and standardization in the methods used to measure patient and staff experience in RPM, leading to a critical knowledge gap in our understanding of the impact of RPM interventions. This review offers targeted support for RPM experience evaluators by providing a structured, comprehensive overview of contemporary patient and staff experience measures and a set of practical guidelines for improving research quality and standardization in this domain.

Introduction

Background and Aim

This is a scenario from the daily life of a patient:

A beeping sound, and a message appears on the smartphone screen: “Reminder: check glucose before bedtime.” Time to go to sleep, indeed, you think while putting down your book and reaching for the glucometer. As you wipe the drop of blood away, you make sure that both Bluetooth and Wi-Fi are on in your phone. Then, the reading is sent: you notice it seems to be rather far from your baseline. While you think of what you might have done differently, a slight agitation emerges: Is this why you feel so tired? The phone beeps again: “Your last glucose reading seems atypical. Could you please try again? Remember to follow these steps.” Groaning, you unwrap another alcohol wipe, rub your finger with it, and test again: this time, the results are normal.

Some patients will recognize certain aspects of this scenario, particularly the ones using a form of remote patient monitoring (RPM), sometimes referred to as remote patient management. RPM is a subset of digital health interventions that aim to improve patient care through digitally transmitted, health-related patient data [ 1 ]. Typically, RPM interventions include the use of 1 or more sensors (including monitoring devices, wearables, or implants), which collect patient data in or out of the hospital to be used for remote clinical decision-making. Partly due to a rapid expansion during the COVID-19 pandemic [ 2 - 5 ], the RPM domain has by now expanded to reach a broad range of medical specialties, sensing technologies, and clinical contexts [ 1 , 6 , 7 ].

RPM is presented as a strategy for enabling health care systems worldwide to face the pressing challenges posed by aging populations [ 8 - 10 ], including the dwindling availability of health care workers [ 11 ] and rising health care costs [ 12 ]. This is because deploying effective RPM solutions across health systems holds the potential to reduce health care resources use, while maintaining or improving care quality. However, evidence regarding RPM effectiveness at scale is mixed [ 13 ]. Few large-scale trials demonstrating a meaningful clinical impact of RPM have been conducted so far, and more research is urgently needed to clarify and address determinants of RPM effectiveness [ 7 ].

Among these determinants, we find the experience of patients and staff using RPM interventions. As noticeable in the introductory scenario, RPM introduces radical experiential changes compared to in-person care; patients might be asked to download and install software; pair, charge, and wear monitoring devices; submit personal data; or attend alerts or calls, all in the midst of everyday life contexts and activities. Similarly, clinical and especially nursing staff might be asked to carry out data analysis and administrative work and maintain remote contact with patients, often without a clear definition of roles and responsibilities and in addition to usual tasks [ 14 ].

Because of these changes, patient and staff experience constitutes a crucial aspect to consider when evaluating RPM interventions. Next to qualitative methods of experience evaluation, mixed and quantitative methods are fundamental, especially to capture information from large pools of users. However, the current RPM experience-measuring landscape suffers from a lack of methodological standardization, reflected in both what is measured and how it is measured. Regarding what is measured, it has been observed that a large number of constructs are used in the literature, often without a clear specification of their significance. This can be noticed even regarding popular constructs, such as satisfaction: Mair and Whitten [ 15 ], for instance, observe how the meaning of the satisfaction construct is seldom defined in patient surveys, leaving readers “unable to discern whether the participants said they were satisfied because telemedicine didn't kill them, or that it was ‘OK,’ or that it was a wonderful experience.” Previous work also registers a broad diversity in the instruments used to measure a specific construct. For instance, in their review of RPM interventions for heart failure, Kraai et al [ 16 ] report that none of the papers they examined used the same survey to measure patient satisfaction, and only 1 was assessed on validity and reliability.

In this proliferation of constructs and instruments, no comprehensive overview exists of their application to measuring patient and staff experience in the RPM domain. The lack of such an overview negatively affects research in this domain in at least 2 ways. At the level of primary research, RPM practitioners and researchers have little guidance on how to include experience measuring in their study designs. At the level of secondary research, the lack of consistently used measures makes it hard to compare results between different studies and RPM solutions. Altogether, the lack of standardization in experience measuring constitutes a research gap that needs to be bridged in order for RPM to fully deliver on its promises.

In this review, this gap is addressed through an effort to provide a structured overview of patient and staff experience constructs and instruments used in RPM evaluation. First, we position the role of RPM-related patient and staff experience within the broader system of care using the Quadruple Aim framework. Next, we describe the systematic review we performed of patient and staff experience–relevant constructs and instruments used in contemporary research aimed at evaluating RPM interventions. After presenting and discussing the results of this review, we propose a set of guidelines for RPM experience evaluators and indicate directions for further research.

The Role of Patient and Staff Experience in RPM

Many characterizations of patient and staff experience exist [ 17 - 19 ], some of which distinguish between determinants of experience and experience manifestations [ 20 ]. For our review, we maintained this distinction, as we aimed to focus on the broad spectrum of factors affecting and affected by patient and staff experience. To do so, we adopted the general conceptualization of patient and staff experience as characterized in the Quadruple Aim, a widely used framework for health system optimization centered around 4 overarching goals: improving the individual experience of care, improving the experience of providing care, improving the health of populations, and reducing the per capita cost of care [ 21 ]. Adopting a Quadruple Aim perspective allows health system researchers and innovators to recognize not only the importance of patient and staff experience in their own right but also the inextricable relations of these 2 goals to the other dimensions of health system performance [ 22 ]. To clarify the nature of these relations in the RPM domain, we provide a schematic overview in Figure 1.


Next, we refer to the numbers in Figure 1 to touch upon prominent relationships between patient and staff experience in RPM within the Quadruple Aim framework and provide examples of experience constructs relevant to each relationship:

  • Numbers 1 and 2: The characteristics of specific RPM interventions directly affect the patient and staff experience. Examples of experience constructs related to this mechanism are expressed in terms of usability or wearability , which are attributes of systems or products contributing to the care experience of patients and the work experience of staff.
  • Numbers 3 and 4: Patient and staff experiences relate to each other through care delivery. Human connections, especially in the form of carer-patient relationships, represent a major factor in both patient and staff experience. An example of experience constructs related to this mechanism is expressed in terms of the quality of the relationship .
  • Numbers 5 and 6: A major determinant of patient experience is represented by the health outcomes achieved as a result of the received care. An example of a measure of quality related to this mechanism is expressed in terms of the quality of life , which is an attribute of patient experience directly affected by a patient’s health status. In contrast, patient experience itself is a determinant of the clinical effectiveness of RPM interventions. For example, the patient experience afforded by a given intervention is a determinant of both adoption of and adherence to that intervention, ultimately affecting its clinical impact. In a recent review, for instance, low patient adherence was identified as the main factor associated with ineffective RPM services [ 23 ].
  • Number 7: Similarly, staff experience can be a determinant of clinical effectiveness. Experience-related issues, such as alarm fatigue , contribute to medical errors and lower the quality of care delivery [ 24 ].
  • Number 8: Staff experience can also impact the cost of care. For example, the time effort required for the use of a given intervention can constitute a source of extra costs. More indirectly, low staff satisfaction and excessive workload can increase health care staff turnover, resulting in additional expenses at the level of the health system.

Overall, the overview in Figure 1 can help us grasp the nuances of the role of patient and staff experience on the overall impact of RPM interventions, as well as the importance of measuring experience factors, not only in isolation, but also in relation to other dimensions of care quality. In this review, we therefore covered a broad range of experience-relevant factors, including both experiential determinants (eg, usability) and manifestations (eg, adherence). Overall, this study aimed to obtain a comprehensive set of experience constructs and corresponding measurement instruments used in contemporary RPM research and to propose an initial set of guidelines for improving methodological standardization in this domain.

Protocol Registration and PRISMA Guidelines

The study protocol was registered in the PROSPERO (International Prospective Register of Systematic Reviews) database (CRD42021250707). This systematic review adhered to the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. The PRISMA checklist is provided in Multimedia Appendix 1 [ 25 ].

Criteria for Study Eligibility

Our study population consisted of adult (≥18 years old) patients and staff members involved as participants in reported RPM evaluations. Full-text papers reporting instances of patient and staff experience measuring in RPM interventions, written in English, and published after January 1, 2011, were considered for eligibility.

For the scope of our review, we considered as RPM any intervention possessing the following characteristics:

  • Sensor-based patient monitoring, intended as the use of at least 1 sensor to collect patient information at a distance. Therefore, we excluded interventions that were purely based on the collection of “sensor-less” self-reported measures from patients. This is because we believe the use of sensors constitutes a key element of RPM and one that strongly contributes to experiential aspects in this domain. However, we adopted a broad definition of “sensor,” considering as such, for instance, smartphone cameras (eg, postoperative wound-monitoring apps) and analog scales or thermometers (eg, interventions relying on patients submitting manually entered weights or temperatures). By “at a distance,” we meant not only cases in which data were transferred from nonclinical environments, such as home monitoring, but also cases such as tele–intensive care units (tele-ICUs), in which data were transferred from one clinical environment to another. Furthermore, we included interventions relying on both continuous and intermittent monitoring.
  • Clinical decision-making as an intended use of remotely collected data. Therefore, we excluded interventions in which the collected data were meant to be used exclusively for research purposes and not as a stage of development of an RPM intervention to be adopted in patient care. For instance, we excluded cases in which the remotely collected patient data were only used to test research hypotheses unrelated to the objective of implementing RPM interventions (eg, for drug development purposes). This is because in this review we were interested in RPM as a tool for the provision of remote patient care, rather than as an instrument for research. We also excluded interventions in which patients themselves were the only recipients of the collected data and no health care professional was involved in the data analysis and use.

Furthermore, we excluded:

  • Evaluations of attitudes, not interventions: contributions in which only general attitudes toward RPM in abstract were investigated, rather than 1 or more specific RPM interventions.
  • Not reporting any evaluation: contributions not focusing on the evaluation of 1 or more specific RPM interventions, for instance, papers providing theoretical perspectives on the field (eg, research frameworks or theoretical models).
  • Evaluation of technology, not interventions: contributions only focused on evaluating RPM-related technology, for instance, papers focused on testing sensors, software, or other service components in isolation rather than as a part of any specific RPM intervention.
  • Not just RPM: contributions not specifically focused on RPM but including RPM interventions in their scope of research, for instance, papers reporting on surveys obtained from broad cohorts of patients (including RPM recipients) in a noncontrolled way. An example of such contributions would be represented by studies focusing on patient experience with mobile health apps in general, covering both interventions involving RPM and interventions not including any kind of patient monitoring, without a clear way to distinguish between the 2 kinds of interventions in the contribution results. This was chosen in order to maintain the review focus on RPM interventions. Instead, papers including both RPM and other forms of care provisions within the same intervention were included, as well as papers comparing RPM to non-RPM interventions in a controlled way.
  • Primary care intervention only: interventions only involving general practitioners (GPs) and other primary care practitioners as health care providers of the RPM intervention. This is because we expected marked differences between the implementation of RPM in primary care and at other levels of care, due to deep dissimilarities in settings, workflows, and routines. Examples of RPM interventions only involving primary care providers included kiosk systems (for which a common measuring point was provided to many patients) or pharmacy-managed medication-monitoring programs. RPM interventions involving both primary care providers and providers from higher levels of care, however, were included in the review.
  • Staff-to-staff intervention: contributions reporting on interventions exclusively directed at staff, for instance, papers reporting on RPM methods aimed at monitoring stress levels of health care workers.
  • Target group other than patient or staff: contributions aimed at collecting experience measures in target groups other than patients or staff, for instance, papers investigating the experience in RPM for informal caregivers.

Search Method

To identify relevant publications, the following electronic databases were searched: (1) Medline (PubMed) and (2) EMBASE. Search terms included controlled terms from Medical Subject Headings (MeSH) in PubMed and Emtree in EMBASE, as well as free-text terms. Query term selection and structuring were performed collaboratively by authors VP, HCMO, and PG (who is a clinical librarian at the Leiden University medical library). The full search strategies are reported in Multimedia Appendix 2. Because the aim of the review was to paint a contemporary picture of experience measures used in RPM, only studies published starting from January 1, 2011, were included.

Study Selection

Study selection was performed by VP and HCMO, who used Rayyan, an online research tool for managing review studies [ 26 ], to independently screen both titles and abstracts in the initial screening and full texts in the final screening. Discrepancies were resolved by discussion. A flowchart of study selection is depicted in Figure 2.


Quality Appraisal

The objective of this review was to provide a comprehensive overview of the relevant literature, rather than a synthesis of specific intervention outcomes. Therefore, no papers were excluded based on the quality appraisal, in alignment with similar studies [ 27 ].

Data Extraction and Management

Data extraction was performed independently by VP and HCMO, using a predefined Microsoft Excel sheet that they designed. The sheet was first piloted on 15 included studies and iterated upon to optimize the data extraction process. The full text of all included studies was retrieved and uploaded into the Rayyan environment. Next, the full text of each included study was examined, and relevant data were entered manually into the predefined Excel sheet. Discrepancies were resolved by discussion. The following data types were extracted: (1) general study information (authors, title, year of publication, type of study, country or countries); (2) target disease(s), intervention, or clinical specialty; (3) used patient or staff experience evaluation instrument and measured experience construct; (4) evidence base, if indicated; and (5) number of involved staff or patient participants. By “construct,” we referred to the “abstract idea, underlying theme, or subject matter that one wishes to measure using survey questions” [ 28 ]. To identify the measured experience construct, we used the definition provided in the source contribution, whenever available.

Data Analysis

First, we analyzed the collected data by building general overviews depicting the target participants (patients or staff) of the collected experience measures and their use over time. To organize the diverse set of results collected through the systematic review, we then performed a correspondence analysis (CA) [ 29 ], a multivariate statistical technique used for exploring and displaying relationships between categorical data. CA transforms a 2-way table of frequencies between a row and a column variable into a visual representation of relatedness between the variables. This relatedness is expressed in terms of inertia, which represents “a measure of deviation from independence” [ 30 ] between the row and column variables. Any deviation from the frequencies expected if the row and column variables were completely independent of each other contributes to the total inertia of the model. CA breaks down this inertia by identifying mutually independent (orthogonal) dimensions, with each successive dimension explaining less of the total inertia of the model. On each dimension, relatedness is expressed in terms of the relative closeness of rows to each other, as well as the relative closeness of columns to each other. CA has previously been used to find patterns in systematic review data in the health care domain [ 31 ].
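The inertia decomposition described above can be sketched in a few lines of Python. This is a minimal illustration of the standard CA computation (standardized residuals followed by a singular value decomposition); the toy instrument-by-construct frequency table and its labels are hypothetical, not the review's actual data.

```python
import numpy as np

# Hypothetical 2-way frequency table: rows = instruments, columns = constructs.
N = np.array([[40.0,  5.0,  2.0],   # eg, custom survey
              [ 3.0, 30.0,  1.0],   # eg, log file analysis
              [ 2.0,  4.0, 25.0],   # eg, a standard questionnaire
              [10.0,  8.0,  6.0]])  # eg, task analysis

P = N / N.sum()                        # correspondence matrix
r, c = P.sum(axis=1), P.sum(axis=0)    # row and column masses

# Standardized residuals: deviations from the frequencies expected
# under complete independence of rows and columns.
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))

# The SVD yields mutually orthogonal dimensions; each squared singular
# value is the inertia carried by one successive dimension.
U, sv, Vt = np.linalg.svd(S, full_matrices=False)
total_inertia = (sv ** 2).sum()
explained = (sv ** 2) / total_inertia  # proportion of inertia per dimension

# Principal coordinates of the rows (instruments), used for plotting.
row_coords = (U * sv) / np.sqrt(r)[:, None]
```

The `explained` vector is what supports statements such as "the first 2 dimensions contributed more than 80% of the model's inertia."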

In our case, a 2-way table of frequencies was built based on how often any given instrument (eg, System Usability Scale [SUS]) was used to measure any given construct (eg, usability) in the included literature. Therefore, in our case, the total inertia of the model represented the amassed evidence base for relatedness between the collected experience constructs and measures, based on how they were used in the included literature.

To build the table of frequencies, the data extracted from the systematic review underwent a round of cleaning, in which the formulation of similar constructs was made more homogeneous: for instance, “time to review,” “time to response,” and “time for task” were merged under 1 label, “time effort.” An overview of the merged construct formulations is provided in Multimedia Appendix 3. The result of the CA was a model in which 2 dimensions contributed more than 80% of the model’s inertia (explaining 44.8% and 35.7%, respectively) and none of the remaining 59 dimensions contributed more than 7.3% to the remaining inertia. This gap suggests that the first 2 dimensions express meaningful relationships that are not purely based on random variation. A 2D solution was thus chosen.
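A minimal sketch of this cleaning and table-building step, assuming pandas; the three extraction rows and the merge map are illustrative stand-ins (the actual mapping is the one in Multimedia Appendix 3):

```python
import pandas as pd

# Hypothetical extraction rows: 1 experience-measuring instance each.
extracted = pd.DataFrame({
    "instrument": ["custom survey", "log file analysis", "custom survey"],
    "construct":  ["time to review", "time to response", "satisfaction"],
})

# Illustrative merge map: similar construct formulations -> 1 harmonized label.
merge_map = {
    "time to review":   "time effort",
    "time to response": "time effort",
    "time for task":    "time effort",
}
extracted["construct"] = extracted["construct"].replace(merge_map)

# 2-way table of frequencies (instrument x construct), the input to the CA.
freq = pd.crosstab(extracted["instrument"], extracted["construct"])
```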

General Observations

A total of 158 studies reporting at least 1 instance of patient or staff experience measuring in RPM were included in the review. The included studies covered a broad range of RPM interventions, most prominently diabetes care (n=30, 19%), implanted devices (n=12, 7.6%), and chronic obstructive pulmonary disease (COPD; n=10, 6.3%). From these studies, we extracted 546 experience-measuring instances in RPM, covering 160 unique experience-measuring instruments used to measure 120 unique experience constructs.

Our results included 4 kinds of versatile (ie, nonspecific) experience-measuring instruments: the custom survey, log file analysis, protocol database analysis, and task analysis. All of these can be used to measure disparate kinds of constructs:

  • By “custom survey,” we refer to survey instruments created to evaluate patient or staff experience in connection to 1 specific RPM study and only for that study.
  • By “log file analysis,” we refer to the set of experience assessment methods based on the automatic collection of data through the RPM digital infrastructures themselves [ 32 ]; examples are clicks, uploads, views, or other forms of interactions between users and the RPM digital system. This set of methods is typically used to estimate experience-relevant constructs, such as adherence and compliance.
  • By “protocol database analysis,” we refer to the set of experience assessment methods based on the manual collection of data performed by RPM researchers within a specific research protocol; an example of a construct measured with these instruments is the willingness to enroll.
  • By “task analysis,” we refer to the set of experience assessment methods based on the real-life observation of users interacting with the RPM system [ 33 ].
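To make the log file analysis idea concrete, the following is a minimal sketch of estimating an adherence-type construct from upload logs. The daily-upload protocol, the 7-day window, and all dates are hypothetical assumptions for illustration only:

```python
from datetime import date

# Hypothetical upload dates harvested from an RPM platform's log files.
uploads = [date(2024, 1, 1), date(2024, 1, 1),   # 2 uploads on the same day
           date(2024, 1, 3), date(2024, 1, 4)]

# Assume the protocol expects 1 upload per day over a 7-day window.
start, end = date(2024, 1, 1), date(2024, 1, 7)
expected_days = (end - start).days + 1

# Adherence estimated as the share of protocol days with at least 1 upload.
active_days = {d for d in uploads if start <= d <= end}
adherence = len(active_days) / expected_days  # 3 active days out of 7
```

The same counting logic extends to clicks, views, or other logged interactions between users and the RPM digital system.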

In addition to these 4 instruments, our results included a large number of specific instruments, such as standard indexes, surveys, and questionnaires. Overall, the most frequently reported instrument was, by far, the custom survey (155/546, 28.4% of instances), while the most frequently reported experience construct was satisfaction (85/546, 15.6%), closely followed by quality of life (71/546, 13.0%).

Target Participants and Timeline

We found large differences in the number of RPM-relevant experience constructs and instruments used for patients and for staff (see Figure 3 ). We also found instruments used for both patients and staff. Either these were broadly used instruments (eg, the SUS) that were administered to both patients and staff within the same study, or they were measures of interactions between patients and staff (eg, log file analysis instruments recording the number of remote contacts between patients and nursing assistants).


RPM research appears to focus much more on patient experience than on staff experience, which was investigated in only 20 (12.7%) of the 158 included papers. Although it is possible that our exclusion criteria contributed to the paucity of staff experience measures, only 2 (0.1%) of 2092 studies were excluded for reporting on interventions directed exclusively at staff. Of the 41 (2%) studies we excluded for reporting on primary care interventions, we found 6 (15%) studies reporting on staff experience, a rate comparable to the one in the included sample. Furthermore, although our choice to exclude papers reporting on the RPM experience of informal caregivers might have contributed to a reduction in the number of collected constructs and measures, only 2 (0.1%) of 2092 studies were excluded for this reason, and the constructs used in these contributions were not dissimilar from the ones found in the included literature.

Among the included contributions that did investigate staff experience, we noticed that the number of participating staff members was reported in only a minority of cases (9/20, 45%).

Furthermore, a time-based overview of the collected results ( Figure 4 ) showed the expansion of the field over the time frame of interest for both patient and staff experience measures.


Correspondence Analysis

The plotted results of the CA of experience constructs are shown in Figure 5. Here, we discuss the orientation and interpretation of each dimension.


The first dimension explained more than 44% of the model’s inertia. The contributions of this dimension showed which constructs had the most impact in determining its orientation: satisfaction (36%) and to a lesser extent adherence (26%) and quality of life (17%). On the negative (left) side of this dimension, we found constructs such as satisfaction, perceptions, and acceptability, which are associated with subjective measures of patient and staff experience and relate to how people feel or think in relation to RPM interventions. On the positive (right) side of this dimension, we found constructs such as adherence, compliance, and quality of life, which are associated with objectivized measures of patient and staff experience. By “objectivized measures,” we referred to measures that are meant to capture phenomena in a factual manner, ideally independently from personal biases and subjective opinions. Adherence and compliance, particularly, are often measured through passive collection of system data (eg, log file analysis) that reflect objective measures of the way patients or staff interact with RPM propositions. Even in the case of (health-related) quality of life, which can include subjective connotations and components, measures usually aim at capturing an estimation of the factual impact of health status on a person’s overall life quality.

In this sense, we attributed a distinction between “how people feel” versus “what happens” experience constructs to this first dimension. We noted that a similar distinction (between subjective vs objective measures of engagement in remote measurement studies) was previously proposed as a meaningful differentiation to structure “a field impeded by incoherent measures” [ 27 ].

The second dimension explained 35% of the model’s inertia. The contributions of this dimension showed which constructs had the most impact in determining its orientation: quality of life (62%) and adherence (24%). On the negative (bottom) side of this dimension, we found constructs such as quality of life, depression, and anxiety, which are often used as experiential descriptors of health outcomes. On the positive (top) side of this dimension, we found adherence, compliance, and frequency, which are often used as descriptions of the interactions of patients or staff with a specific (RPM) system. Thus, we attributed a distinction between health-relevant versus system-relevant experience constructs to this second dimension.

Based on the results of the CA, we proposed a categorization of patient and staff experience–related constructs into 4 partly overlapping clusters. Consistent with the explanation offered for the 2 dimensions and in consideration of the constructs found in each area, we labeled these as service system–related experience measures, care-related experience measures, usage- and adherence-related experience measures, and health outcome–related experience measures. In Figure 6 , we display the results of the CA labeled through this categorization. In this second visualization, we presented the results on a logarithmic scale to improve the visibility of constructs close to the center of the axes. Overall, this categorization of patient and staff experience constructs used in the RPM literature paints a landscape of the contemporary research in this field, which shows a mix of influences from clinical disciplines, health psychology, human factors engineering, service design, user research, systems engineering, and computer science.


A visualization of the reported patient experience constructs and some of the related measuring instruments, organized by the categories identified in the CA, is available in Figure 7 . A complete version of this visual can be found in Multimedia Appendix 4 , and an interactive version can be found in [ 34 ]. In this figure, we can note the limited crossover between constructs belonging to different categories, with the exception of versatile instruments, such as custom survey and log file analysis.


Recommendations

In light of the collected findings, here we provide a set of recommendations for RPM patient and staff experience evaluators, in terms of both what to measure and how to measure it ( Figure 8 ). Although these recommendations serve to strengthen the quality of individual research protocols, they are also meant to stimulate increased standardization in the field as a whole.


Regarding what to measure, we provide 4 main recommendations. The first is to conduct structured evaluations of staff experience alongside patient experience. Failing to evaluate staff experience leads to risks such as undetected staff nonadherence, misuse, and overwork. Although new competencies need to be developed in order for staff to unlock the untapped potential of RPM [ 35 ], seamless integration with existing clinical workflows should always be pursued and monitored.

The second recommendation is to consider experience constructs in all 4 clusters indicated in Figure 6 , as these represent complementary facets of an overall experiential ensemble. Failing to do so exposes RPM evaluators to the risk of obtaining partial information (eg, only shedding light on how people feel but not on what happens in terms of patient and staff experience in RPM).

The third recommendation is to explicitly define and report a clear rationale regarding which aspects of patient and staff experience to prioritize in evaluations, depending on the goals and specificities of the RPM intervention. This rationale should ideally be informed by preliminary qualitative research and by a collaborative mapping of the expected relationships between patient and staff experience and other components of the Quadruple Aim framework for the RPM intervention at hand. Failing to follow this recommendation exposes RPM evaluators to the risk of obtaining results that are logically detached from each other and as such cannot inform organic improvement efforts. Good examples of reporting a clear rationale were provided by Alonso-Solís et al [ 36 ] and den Bakker et al [ 37 ], who offered detailed accounts of the considerations used to guide the selection of included experience measures. Several existing frameworks and methods can be used to map such considerations, including the nonadoption, abandonment, scale-up, spread, and sustainability (NASSS) framework [ 38 ] and the logical framework [ 39 ]. A relatively lightweight way to achieve such an overview is to use Figure 1 as a checklist to inventory possible Quadruple Aim relationships for a specific RPM intervention.

The fourth recommendation is to routinely reassess the chosen set of experience measures after each iteration of the RPM intervention design. Initial assumptions regarding relationships between experience factors and other dimensions of intervention quality should be verified once the relevant data are available, and new ones should be formulated, if necessary. If the RPM intervention transitions from research stages to implementation as the standard of care, it is recommended to continue collecting at least some basic experience measures for system quality monitoring and continuous improvement. Failing to update the set of collected measures as the RPM intervention progresses through successive development stages exposes RPM evaluators to the risk of collecting outdated information, hindering iterative improvement processes.

Regarding how to measure RPM patient and staff experience, we provide 2 main recommendations. The first is to work with existing, validated, and widely used instruments as much as possible, only creating new instruments after a convincing critique of current ones. Figure 7 can be used to find existing instruments measuring a broad range of experience-relevant constructs so as to reduce the need to create new ones.

For instance, researchers interested in evaluating certain experience constructs, ideally informed by preliminary qualitative research, might consult the full version of Figure 7 (available in Multimedia Appendix 4 or as an interactive map in Ref. [ 34 ]) to find their construct of interest on the left side of the graph, follow the connecting lines to the existing relevant measures on the right, and identify the most frequently used ones. They can also use the visual to consider other possibly relevant constructs.

Alternatively, researchers can use the open access database of this review [ 40 ] and especially the “extracted data” Excel file to search for the construct of interest and find details of papers in the RPM domain in which the construct was previously measured.

Failing to follow this recommendation exposes RPM researchers to the risk of obtaining results that cannot be compared to meaningful benchmarks or other RPM interventions, or included in meta-analyses.

The second recommendation is to consider adopting automatic, “passive” methods of experience data collection, such as the ones we referred to in this review as log file analysis, so as to obtain actionable estimates of user behavior with a reduced need for patients and staff to fill in tedious surveys [ 41 ] or otherwise provide active input. Failing to consider automatically collected log file data on patient and staff experience constitutes a missed opportunity in terms of both the quality and cost of evaluation data. We recognize such nascent data innovations as promising [ 42 ] but also in need of methodological definition, particularly in terms of an ethical evaluation of data privacy and access [ 43 , 44 ] in order to avoid exploitative forms of prosumption [ 45 ].

Principal Findings

This study resulted in a structured overview of patient and staff experience measures used in contemporary RPM research. Through this effort, we found that the research landscape has seen a sizeable growth in the past 10 years, that it is affected by a relative lack of focus on staff experience, and that the overall corpus of collected measures can be organized in 4 main categories (service system–related, care-related, usage- and adherence-related, and health outcome–related experience measures). Little to no consensus or standardization was found in the adopted methods. Based on these findings, a set of 6 actionable recommendations for RPM experience evaluators was provided, with the aim of improving the quality and standardization of experience-related RPM research. The results of this review align with and expand on recent contributions in the field, with particular regard to the work of White et al [ 27 ].

Directions for Further Research

Fruitful future research opportunities have been recognized in various areas of RPM experience measuring. Among them, we stress the need for comparative studies investigating patient and staff experience factors across different RPM interventions; for studies clarifying the use, potential, and limitations of log file analysis in this domain; and (most importantly) for studies examining the complex relationships between experience factors, health outcomes, and cost-effectiveness in RPM.

Ultimately, we recognize the need for integrated data strategies for RPM, understood as processes and rules that define how to manage, analyze, and act upon RPM data, including continuously collected experience data, as well as clinical, technical, and administrative data. Data strategies can represent a way to operationalize a systems approach to health care innovation, described by Komashie et al [ 46 ] as “a way of addressing health delivery challenges that recognizes the multiplicity of elements interacting to impact an outcome of interest and implements processes or tools in a holistic way.” As complex, adaptive, and partly automated systems, RPM interventions require sophisticated data strategies in order to function and improve [ 47 ]; continuous loops of system feedback need to be established and analyzed in order to monitor the impact of RPM systems and optimize their performance over time, while respecting patients’ and staff’s privacy. This is especially true in the case of RPM systems including artificial intelligence (AI) components, which require continuous monitoring and updating of algorithms [ 48 - 50 ]. We characterize the development of integrated, interdisciplinary data strategies as a paramount challenge in contemporary RPM research, which will require closer collaboration between digital health designers and health care professionals [ 51 - 53 ]. We hope to have provided a small contribution to this overall goal through our effort to structure the current landscape of RPM patient and staff experience evaluation.

Strengths and Limitations

We acknowledge both strengths and limitations of the chosen methodologies. The main strength of this review is its extensive focus, covering a large number of experience measures and RPM interventions. However, a limitation introduced by such a broad scope is the lack of differentiation by targeted condition, clinical specialty, RPM intervention characteristics, geographical area, or other relevant distinctions. Furthermore, limitations were introduced by choices such as focusing exclusively on contributions in English and on nonprimary care and nonpediatric RPM interventions.

Contemporary patient and staff experience measuring in RPM suffers from a lack of consensus and standardization, which affects the quality of both primary and secondary research in this domain. This issue creates a critical knowledge gap in our understanding of the effectiveness of RPM interventions, which are known to bring about radical changes to the care experience of both patients and staff. Bridging this knowledge gap appears critical in a global context of urgent need for increased resource effectiveness across health care systems, including through the increased adoption of safe and effective RPM. In this context, this review offers support for RPM experience evaluators by providing a structured overview of contemporary patient and staff experience measures and a set of practical guidelines for improving research quality and standardization in this domain.

Acknowledgments

We gratefully acknowledge Jeroen Raijmakers, Francesca Marino, Lorena Hurtado Alvarez, Alexis Derumigny, and Laurens Schuurkamp for the help and advice provided in the context of this research.

Neither ChatGPT nor other generative language models were used in this research or in the manuscript preparation or review.

Data Availability

The data sets generated and analyzed during this review are available as open access in Ref. [ 40 ].

Authors' Contributions

VP conceived the study, performed the systematic review and data analysis, and was mainly responsible for the writing of the manuscript. HCMO collaborated on study design, performed independent screening of contributions, and collaborated on data analysis. RvK provided input to the study design and execution. PG supported query term selection and structuring. MK provided input on manuscript framing and positioning. DS provided input on the design, execution, and reporting of the correspondence analysis. All authors revised and made substantial contributions to the manuscript.

Conflicts of Interest

None declared.

PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) checklist.

Full search strategies.

Overview of the merged construct formulations.

Reported patient experience constructs and associated measuring instruments (complete visual).

  • da Farias FAC, Dagostini CM, Bicca YDA, Falavigna VF, Falavigna A. Remote patient monitoring: a systematic review. Telemed J E Health. May 17, 2020;26(5):576-583. [ CrossRef ] [ Medline ]
  • Taiwo O, Ezugwu AE. Smart healthcare support for remote patient monitoring during COVID-19 quarantine. Inform Med Unlocked. 2020;20:100428. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Fagherazzi G, Goetzinger C, Rashid MA, Aguayo GA, Huiart L. Digital health strategies to fight COVID-19 worldwide: challenges, recommendations, and a call for papers. J Med Internet Res. Jun 16, 2020;22(6):e19284. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Peek N, Sujan M, Scott P. Digital health and care in pandemic times: impact of COVID-19. BMJ Health Care Inform. Jun 21, 2020;27(1):e100166. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Sust PP, Solans O, Fajardo JC, Peralta MM, Rodenas P, Gabaldà J, et al. Turning the crisis into an opportunity: digital health strategies deployed during the COVID-19 outbreak. JMIR Public Health Surveill. May 04, 2020;6(2):e19106. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Vegesna A, Tran M, Angelaccio M, Arcona S. Remote patient monitoring via non-invasive digital technologies: a systematic review. Telemed J E Health. Jan 2017;23(1):3-17. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Noah B, Keller MS, Mosadeghi S, Stein L, Johl S, Delshad S, et al. Impact of remote patient monitoring on clinical outcomes: an updated meta-analysis of randomized controlled trials. NPJ Digit Med. Jan 15, 2018;1(1):20172. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Majumder S, Mondal T, Deen M. Wearable sensors for remote health monitoring. Sensors (Basel). Jan 12, 2017;17(1):130. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Coye MJ, Haselkorn A, DeMello S. Remote patient management: technology-enabled innovation and evolving business models for chronic disease care. Health Aff (Millwood). Jan 2009;28(1):126-135. [ CrossRef ] [ Medline ]
  • Schütz N, Knobel SEJ, Botros A, Single M, Pais B, Santschi V, et al. A systems approach towards remote health-monitoring in older adults: introducing a zero-interaction digital exhaust. NPJ Digit Med. Aug 16, 2022;5(1):116. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Drennan VM, Ross F. Global nurse shortages—the facts, the impact and action for change. Br Med Bull. Jun 19, 2019;130(1):25-37. [ CrossRef ] [ Medline ]
  • Global Burden of Disease Health Financing Collaborator Network. Past, present, and future of global health financing: a review of development assistance, government, out-of-pocket, and other private spending on health for 195 countries, 1995-2050. Lancet. Jun 01, 2019;393(10187):2233-2260. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Mecklai K, Smith N, Stern AD, Kramer DB. Remote patient monitoring — overdue or overused? N Engl J Med. Apr 15, 2021;384(15):1384-1386. [ CrossRef ]
  • León MA, Pannunzio V, Kleinsmann M. The impact of perioperative remote patient monitoring on clinical staff workflows: scoping review. JMIR Hum Factors. Jun 06, 2022;9(2):e37204. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Mair F, Whitten P. Systematic review of studies of patient satisfaction with telemedicine. BMJ. Jun 03, 2000;320(7248):1517-1520. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Kraai I, Luttik M, de Jong R, Jaarsma T, Hillege H. Heart failure patients monitored with telemedicine: patient satisfaction, a review of the literature. J Card Fail. Aug 2011;17(8):684-690. [ CrossRef ] [ Medline ]


Can Med Educ J. 2021 Jun;12(3)

Writing, reading, and critiquing reviews

Douglas Archibald

1 University of Ottawa, Ontario, Canada;

Maria Athina Martimianakis

2 University of Toronto, Ontario, Canada

Why reviews matter

What do all authors of the CMEJ have in common? For that matter, what do all health professions education scholars have in common? We all engage with literature. When you have an idea or question, the first thing you do is find out what has been published on the topic of interest. Literature reviews are foundational to any study. They describe what is known about a given topic and lead us to identify a knowledge gap to study. All reviews require authors to accurately summarize, synthesize, interpret, and even critique the research literature. 1 , 2 In fact, for this editorial we have had to review the literature on reviews. Knowledge and evidence are expanding in our field of health professions education at an ever-increasing rate, and well-written reviews are essential to help us keep pace. Though reviews may be difficult to write, they will always be read. In this editorial we survey the various forms review articles can take, and we provide authors and reviewers at CMEJ with some guidance and resources for writing and/or reviewing a review article.

What are the types of reviews conducted in Health Professions Education?

Health professions education attracts scholars from across disciplines and professions. For this reason, there are numerous ways to conduct reviews, and it is important to familiarize yourself with these different forms so you can effectively situate your work and write a compelling rationale for your chosen review methodology. 1 , 2 To do this, authors must contend with an ever-increasing lexicon of review types. In 2009, Grant and colleagues developed a typology of reviews to help readers make sense of the different review types, listing fourteen different ways of conducting reviews, not all of which are mutually exclusive. 3 Interestingly, their typology did not include narrative reviews, which are often used by authors in health professions education. In Table 1 , we offer a short description of three common types of review articles submitted to CMEJ.

Three common types of review articles submitted to CMEJ

More recently, authors such as Greenhalgh 4 have drawn attention to the perceived hierarchy that places systematic reviews above scoping and narrative reviews. Like Greenhalgh, 4 we argue that systematic reviews should not be seen as the gold standard of all reviews. Instead, it is important to align the method of review with what the authors hope to achieve, and to pursue the review rigorously, according to the tenets of the chosen review type. Sometimes it is helpful to read part of the literature on your topic before deciding on a methodology for organizing and assessing its usefulness. Importantly, whether you are conducting a review or reading reviews, appreciating the differences between review types can also help you weigh the authors' interpretation of their findings.

In the next section we summarize some general tips for conducting successful reviews.

How to write and review a review article

In 2016, David Cook wrote an editorial for Medical Education on tips for a great review article. 13 These tips are excellent suggestions for all types of articles you are considering submitting to the CMEJ. First, start with a clear question: focused or more general, depending on the type of review you are conducting. Systematic reviews tend to address very focused questions, often summarizing the evidence on your topic. Other types of reviews tend to have broader questions and are more exploratory in nature.

Following your question, choose an approach and plan your methods to match your question, just as you would for a research study. Fortunately, there are guidelines for many types of reviews. As Cook points out, the most important consideration is to be sure that the methods you follow lead to a defensible answer to your review question. Many guides are available to help you reach such an answer: for systematic reviews, consult the PRISMA guidelines; 13 for scoping reviews, PRISMA-ScR; 14 and for narrative reviews, SANRA. 15 It is also important to explain to readers why you have chosen to conduct a review. You may be introducing a new way of addressing an old problem, drawing links across literatures, or filling in gaps in our knowledge about a phenomenon or educational practice. Cook refers to this as setting the stage. Linking back to the literature is important. In systematic reviews, for example, you must clearly explain how your review builds on existing literature and previous reviews. This is your opportunity to be critical: what are the gaps and limitations of previous reviews, and how will your systematic review resolve the shortcomings of previous work? In other types of reviews, such as narrative reviews, it is less about filling a specific knowledge gap and more about generating new research topic areas, exposing blind spots in our thinking, or making creative new links across issues. Whatever type of review paper you are working on, the next steps can be applied to any scholarly writing. Be clear and offer insight. What is your main message? A review is more than a list of studies or references on your topic. Lead your readers to a convincing message, and provide commentary and interpretation for the studies in your review to inform your conclusions. For systematic reviews, Cook's final tip is perhaps the most important: report completely. You need to explain all your methods and report enough detail that readers can verify the main findings of each study you review. The most common reason CMEJ reviewers recommend declining a review article is that authors do not follow these last tips: they do not provide readers with enough detail to substantiate their interpretations, or the message is not clear. Our recommendation for writing a great review is to follow the previous tips and to have colleagues read over your paper to ensure you have provided a clear, detailed description and interpretation.

Finally, we leave you with some resources to guide your review writing. 3 , 7 , 8 , 10 , 11 , 16 , 17 We look forward to seeing your future work. One thing is certain, a better appreciation of what different reviews provide to the field will contribute to more purposeful exploration of the literature and better manuscript writing in general.

In this issue we present many interesting and worthwhile papers, two of which are, in fact, reviews.

Major Contributions

A chance for reform: the environmental impact of travel for general surgery residency interviews by Fung et al. 18 estimated the CO 2 emissions associated with traveling for residency position interviews. Due to the high emissions levels (mean 1.82 tonnes per applicant), they called for the consideration of alternative options such as videoconference interviews.

Understanding community family medicine preceptors’ involvement in educational scholarship: perceptions, influencing factors and promising areas for action by Ward and team 19 identified barriers, enablers, and opportunities to grow educational scholarship at community-based teaching sites. They discovered a growing interest in educational scholarship among community-based family medicine preceptors and hope the identification of successful processes will be beneficial for other community-based Family Medicine preceptors.

Exploring the global impact of the COVID-19 pandemic on medical education: an international cross-sectional study of medical learners by Allison Brown and team 20 studied the impact of COVID-19 on medical learners around the world. There were different concerns depending on the levels of training, such as residents’ concerns with career timeline compared to trainees’ concerns with the quality of learning. Overall, the learners negatively perceived the disruption at all levels and geographic regions.

The impact of local health professions education grants: is it worth the investment? by Susan Humphrey-Murto and co-authors 21 considered factors that lead to the publication of studies supported by local medical education grants. They identified several factors associated with publication success, including previous oral or poster presentations. They hope their results will be valuable for Canadian centres with local grant programs.

Exploring the impact of the COVID-19 pandemic on medical learner wellness: a needs assessment for the development of learner wellness interventions by Stephana Cherak and team 22 studied learner-wellness in various training environments disrupted by the pandemic. They reported a negative impact on learner wellness at all stages of training. Their results can benefit the development of future wellness interventions.

Program directors’ reflections on national policy change in medical education: insights on decision-making, accreditation, and the CanMEDS framework by Dore, Bogie, et al. 23 invited program directors to reflect on the introduction of the CanMEDS framework into Canadian postgraduate medical education programs. Their survey revealed that while program directors (PDs) recognized the necessity of the accreditation process, they did not feel they had a voice when the change occurred. The authors concluded that collaborations with PDs would lead to more successful outcomes.

Experiential learning, collaboration and reflection: key ingredients in longitudinal faculty development by Laura Farrell and team 24 stressed several elements for effective longitudinal faculty development (LFD) initiatives. They found that participants benefited from a supportive and collaborative environment while trying to learn a new skill or concept.

Brief Reports

The effect of COVID-19 on medical students’ education and wellbeing: a cross-sectional survey by Stephanie Thibaudeau and team 25 assessed the impact of COVID-19 on medical students. They reported an overall perceived negative impact, including increased depressive symptoms, increased anxiety, and reduced quality of education.

In Do PGY-1 residents in Emergency Medicine have enough experiences in resuscitations and other clinical procedures to meet the requirements of a Competence by Design curriculum?, Meshkat and co-authors 26 recorded the number of adult medical resuscitations and clinical procedures completed by PGY-1 Fellow of the Royal College of Physicians in Emergency Medicine residents and compared them to the Competence by Design requirements. Their study underscored the importance of monitoring collected experiences against pre-set targets. They concluded that residency program curricula should be regularly reviewed to allow for adequate clinical experiences.

Rehearsal simulation for antenatal consults by Anita Cheng and team 27 studied whether rehearsal simulation for antenatal consults helped residents prepare for difficult conversations with parents expecting complications with their baby before birth. They found that while rehearsal simulation improved residents’ confidence and communication techniques, it did not prepare them for unexpected parent responses.

Review Papers and Meta-Analyses

Peer support programs in the fields of medicine and nursing: a systematic search and narrative review by Haykal and co-authors 28 described and evaluated peer support programs in the medical field published in the literature. They found numerous diverse programs and concluded that including a variety of delivery methods to meet the needs of all participants is a key aspect for future peer-support initiatives.

Towards competency-based medical education in addictions psychiatry: a systematic review by Bahji et al. 6 identified addiction interventions to build competency for psychiatry residents and fellows. They found that current psychiatry entrustable professional activities need to be better identified and evaluated to ensure sustained competence in addictions.

Six ways to get a grip on leveraging the expertise of Instructional Design and Technology professionals by Chen and Kleinheksel 29 provided ways to improve technology implementation by clarifying the role that Instructional Design and Technology professionals can play in technology initiatives and technology-enhanced learning. They concluded that a strong collaboration is to the benefit of both the learners and their future patients.

In his article, Seven ways to get a grip on running a successful promotions process, 30 Simon Field provided guidelines for maximizing opportunities for successful promotion experiences. His seven tips included creating a rubric for both self-assessment of likeliness of success and adjudication by the committee.

Six ways to get a grip on your first health education leadership role by Stasiuk and Scott 31 provided tips for considering a health education leadership position. They advised readers to be intentional and methodical in accepting or rejecting positions.

Re-examining the value proposition for Competency-Based Medical Education by Dagnone and team 32 described the excitement and controversy surrounding the implementation of competency-based medical education (CBME) by Canadian postgraduate training programs. They proposed observing which elements of CBME had a positive impact on various outcomes.

You Should Try This

In their work, Interprofessional culinary education workshops at the University of Saskatchewan, Lieffers et al. 33 described the implementation of interprofessional culinary education workshops that were designed to provide health professions students with an experiential and cooperative learning experience while learning about important topics in nutrition. They reported an enthusiastic response and cooperation among students from different health professional programs.

In their article, Physiotherapist-led musculoskeletal education: an innovative approach to teach medical students musculoskeletal assessment techniques, Boulila and team 34 described the implementation of physiotherapist-led workshops and examined whether the workshops increased medical students’ musculoskeletal knowledge and their confidence in assessment techniques.

Instagram as a virtual art display for medical students by Karly Pippitt and team 35 used social media as a platform for showcasing artwork done by first-year medical students. They described this shift to online learning due to COVID-19. Using Instagram was cost-saving and widely accessible. They intend to continue with both online and in-person displays in the future.

Adapting clinical skills volunteer patient recruitment and retention during COVID-19 by Nazerali-Maitland et al. 36 proposed a SLIM-COVID framework as a solution to the problem of dwindling volunteer patients due to COVID-19. Their framework is intended to provide actionable solutions to recruit and engage volunteers in a challenging environment.

In Quick Response codes for virtual learner evaluation of teaching and attendance monitoring, Roxana Mo and co-authors 37 used Quick Response (QR) codes to monitor attendance and obtain evaluations for virtual teaching sessions. They found QR codes valuable for quick and simple feedback that could be used for many educational applications.

In Creation and implementation of the Ottawa Handbook of Emergency Medicine Kaitlin Endres and team 38 described the creation of a handbook they made as an academic resource for medical students as they shift to clerkship. It includes relevant content encountered in Emergency Medicine. While they intended it for medical students, they also see its value for nurses, paramedics, and other medical professionals.

Commentary and Opinions

The alarming situation of medical student mental health by D’Eon and team 39 appealed to medical education leaders to respond to the high numbers of mental health concerns among medical students. They urged leaders to address the underlying problems, such as the excessive demands of the curriculum.

In the shadows: medical student clinical observerships and career exploration in the face of COVID-19 by Law and co-authors 40 offered potential solutions to replace in-person shadowing that has been disrupted due to the COVID-19 pandemic. They hope the alternatives such as virtual shadowing will close the gap in learning caused by the pandemic.

Letters to the Editor

In the Canadian Federation of Medical Students' response to “The alarming situation of medical student mental health,” King et al., 41 on behalf of the Canadian Federation of Medical Students (CFMS), responded to the commentary by D’Eon and team 39 on medical students' mental health. King called upon the medical education community to join the CFMS in its commitment to improving medical student wellbeing.

Re: “Development of a medical education podcast in obstetrics and gynecology” 42 was written by Kirubarajan in response to the article Development of a medical education podcast in obstetrics and gynecology by Black and team. 43 Kirubarajan applauded the development of the podcast to meet a need in medical education and suggested potential future topics, such as interventions to prevent learner burnout.

Response to “First year medical student experiences with a clinical skills seminar emphasizing sexual and gender minority population complexity” by Kumar and Hassan 44 acknowledged the previously published article by Biro et al. 45 that explored limitations in medical training for the LGBTQ2S community. However, Kumar and Hassan advocated for further progress and reform in medical training to address the health requirements of sexual and gender minorities.

In her letter, Journey to the unknown: road closed!, 46 Rosemary Pawliuk responded to the article, Journey into the unknown: considering the international medical graduate perspective on the road to Canadian residency during the COVID-19 pandemic, by Gutman et al. 47 Pawliuk agreed that international medical graduates (IMGs) do not have adequate formal representation in residency training decisions. She therefore challenged health organizations to give the organizations representing IMGs a voice in decision-making.

In Connections, 48 Sara Guzman created a digital painting to portray her approach to learning. Her image of a hand touching a neuron showed her desire to physically see and touch an active neuron in order to further understand the brain and its connections.

TrendyDigests

TrendyDigests

Lockheed Martin and Boeing Compete for Air Force's Next-Generation Contract in 2024 Fighter Jet Showdown

Posted: April 23, 2024 | Last updated: April 23, 2024

<p>While it's uncertain which nation will achieve this milestone first,  the US will strive to be the first to produce a sixth-generation fighter.</p>  <p>related images you might be interested.</p>

Lockheed Martin and Boeing are poised for a head-to-head competition as the service prepares to award the contract for the Next Generation Air Dominance (NGAD) platform. The NGAD is envisioned as a family of systems comprising a crewed sixth-generation fighter aircraft, drone wingmen also referred to as Collaborative Combat Aircraft (CCA), advanced sensor capabilities, and superior network connectivity to satellites and other assets.

<p>The F-22's confrontation with the Typhoon was part of the Red Flag exercises held over Alaska, where pilots undergo rigorous aerial combat training against realistic threats.German Eurofighter pilots claimed a symbolic victory over their American counterparts, although these dogfights were simulated, the German pilots took them very seriously, with one of them saying, “they had a Raptor salad for lunch.” Despite the F-22’s stealth, thrust vectoring capabilities, and advanced sensor fusion, it faced formidable competition from the less stealthy, yet highly maneuverable Typhoon.</p>

The classified solicitation for NGAD’s engineering and manufacturing development contract was initiated in May 2023, signaling the formal commencement of the selection process. As a futuristic endeavor intended to replace the venerable F-22 Raptor, the Air Force aims for the NGAD to be operational by the decade's end. Emphasizing on open-architecture standards, the NGAD is designed to exploit competitive dynamics over its life cycle and curtail maintenance and support expenditures.

critical review of methodology

The NGAD program’s intricate details have been tightly guarded due to security concerns. Notably, in a significant industry shift, Northrop Grumman withdrew from the NGAD competition in 2023, focusing instead on the Navy’s variant, dubbed the F/A-XX. Northrop's CEO, Kathy Warden, indicated the company's strategy in a July earnings call. Consequently, the Air Force's procurement path is projected to see Lockheed Martin and Boeing as the main contenders.

<p>The NGAD's propulsion technology, referred to as Next Generation Adaptive Propulsion (NGAP), is another focal area with the Air Force planning a substantial budgetary injection in 2024. NGAP, with its adaptive capabilities, is positioned to transition rapidly to the optimal engine configuration for varied flight conditions. Featuring advanced composites capable of enduring high temperatures, NGAP draws on research initially considered for the F-35. The substantial investment increase in NGAP, to the tune of $595 million requested for the fiscal year 2024, underscores its centrality to NGAD’s performance.</p>

The NGAD's propulsion technology, referred to as Next Generation Adaptive Propulsion (NGAP), is another focal area with the Air Force planning a substantial budgetary injection in 2024. NGAP, with its adaptive capabilities, is positioned to transition rapidly to the optimal engine configuration for varied flight conditions. Featuring advanced composites capable of enduring high temperatures, NGAP draws on research initially considered for the F-35. The substantial investment increase in NGAP, to the tune of $595 million requested for the fiscal year 2024, underscores its centrality to NGAD’s performance.

<p>In parallel, Pratt & Whitney, an RTX subsidiary, has achieved a milestone with the Air Force's critical design review for its NGAD engine. The prototypical XA103 engine is on track for ground testing in the late 2020s, showcasing Pratt & Whitney's commitment to advancing sixth-generation propulsion. The propulsion innovation entailed in NGAD reflects a broader trend toward maintaining air superiority and ensuring the U.S. maintains its competitive edge in aerospace and defense technology.</p>  <p>related images you might be interested.</p>

In parallel, Pratt & Whitney, an RTX subsidiary, has achieved a milestone with the Air Force's critical design review for its NGAD engine. The prototypical XA103 engine is on track for ground testing in the late 2020s, showcasing Pratt & Whitney's commitment to advancing sixth-generation propulsion. The propulsion innovation entailed in NGAD reflects a broader trend toward maintaining air superiority and ensuring the U.S. maintains its competitive edge in aerospace and defense technology.

related images you might be interested.

critical review of methodology
