Chapter 1: Quantitative Research in Education: Impact on Evidence-Based Instruction

Current Issues and Trends in Special Education: Research, Technology, and Teacher Preparation

ISBN: 978-1-84950-954-1, eISBN: 978-1-84950-955-8

Publication date: 23 April 2010

Quantitative research is based on epistemic beliefs that can be traced back to David Hume. Hume and others who followed in his wake suggested that we can never directly observe cause and effect. Rather, we perceive what is called “constant conjunction,” or the regularities of relationships among events. By observing these regularities, we can develop generalizable laws that, once established, describe predictable patterns that can be replicated with reliability. This form of reasoning, which involves studying groups of individuals, is often called nomothetic and is contrasted with idiographic research, which focuses on the uniqueness of the individual. It is clear that large-scale experiments with random assignment to treatment are based on nomothetic models, as are quasi-experimental studies in which intact groups of people (e.g., students in a particular classroom) are assigned to treatments.
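The contrast between randomized and quasi-experimental designs can be made concrete with a small simulation. The sketch below is not from the chapter; it assumes Python with NumPy and SciPy and wholly invented score data, and simply illustrates how random assignment lets a group comparison isolate a treatment effect, whereas intact classrooms confound treatment with pre-existing differences.

```python
# Hedged sketch: randomized vs. intact-group (quasi-experimental) comparison.
# All numbers are simulated for illustration; nothing here comes from the chapter.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# True experiment: 60 students randomly assigned to treatment (1) or control (0).
baseline = rng.normal(loc=100, scale=15, size=60)          # prior ability
assignment = rng.permutation([0] * 30 + [1] * 30)          # random assignment
scores = baseline + 5 * assignment + rng.normal(0, 5, 60)  # true effect of +5 points

t, p = stats.ttest_ind(scores[assignment == 1], scores[assignment == 0])
print(f"Randomized design: t = {t:.2f}, p = {p:.3f}")

# Quasi-experiment: two intact classrooms that already differed before treatment.
treated_class = rng.normal(105, 15, 30) + 5   # higher-achieving class happens to be treated
control_class = rng.normal(95, 15, 30)
t2, p2 = stats.ttest_ind(treated_class, control_class)
print(f"Intact-group design: t = {t2:.2f}, p = {p2:.3f} (treatment confounded with prior differences)")
```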

Brigham, F.J. (2010), "Chapter 1 Quantitative research in education: Impact on evidence-based instruction", in Obiakor, F.E., Bakken, J.P. and Rotatori, A.F. (Eds), Current Issues and Trends in Special Education: Research, Technology, and Teacher Preparation (Advances in Special Education, Vol. 20), Emerald Group Publishing Limited, Leeds, pp. 3-17. https://doi.org/10.1108/S0270-4013(2010)0000020004

Copyright © 2010, Emerald Group Publishing Limited

Quantitative Research in Education: A Primer

  • Wayne K. Hoy - Ohio State University, USA
  • Curt M. Adams - University of Oklahoma, USA

“The book provides a reference point for beginning educational researchers to grasp the most pertinent elements of designing and conducting research…”

— Megan Tschannen-Moran, The College of William & Mary

Quantitative Research in Education: A Primer, Second Edition is a brief and practical text designed to allay anxiety about quantitative research. Award-winning authors Wayne K. Hoy and Curt M. Adams first introduce readers to the nature of research and science, and then present the meaning of concepts and research problems as they dispel notions that quantitative research is too difficult, too theoretical, and not practical. Rich with concrete examples and illustrations, the Primer emphasizes conceptual understanding and the practical utility of quantitative methods while teaching strategies and techniques for developing original research hypotheses.

The Second Edition includes suggestions for empirical investigation and features a new section on self-determination theory, examples from the latest research, a concluding chapter illustrating the practical applications of quantitative research, and much more. This accessible Primer is perfect for students and researchers who want a quick understanding of the process of scientific inquiry and who want to learn how to effectively create and test ideas.


“This text will definitely be useful in providing students with a solid orientation to research design particularly in quantitative research”

“Precision, precision, precision! I think this is a must have companion text for graduate students who have to complete a thesis or dissertation. The author does an outstanding job of cataloging and describing difficult research methods terms in a clear and concise way.”

“Greatest strength is the comprehensiveness of the treatment”

“A reference point for beginning educational researchers to grasp the most pertinent elements of designing and conducting research”

Provides all the essential information for quantitative research in a concise book.

A book on research in education that can readily be applied to other social science areas. A great, easy-to-follow publication, especially for someone new to statistical analysis.

There are two strong chapters in this publication that are clearer and more relevant than the sources presently being used by my students. Chapter 3 is particularly well written and clear, and builds a progression in understanding statistics. Chapter 4 is also effective; however, I would probably place it before Chapter 3. In terms of detail, there is probably too much in Chapter 4 on hypotheses, whereas Chapter 3 could perhaps be developed by including more examples.

Very helpful book that provides a basis for students undertaking education based research.

For those who are interested in doing research that is quantitative in nature, this book is useful, although we tend to advise a more qualitative approach. I can therefore see myself dipping in and out of this book, as it provides some good explanations and there is follow-through. I would have welcomed more worked examples, as these would have made everything much more concrete.

This is a good supplement to the research methods module, especially for those students who are entering the field of education. The quantitative methods discussed are also transferable to other subjects.

NEW TO THIS EDITION:    

  • A new chapter devoted to the practical applications of education research uses the concepts of collective trust, organizational climate, and improvement science to illustrate the utility of a quantitative approach. It also offers guidelines for analyzing and improving the practice of research in education.
  • New hypotheses found in a variety of research studies are available for readers to analyze and diagram.
  • A new section on self-determination theory has been added to demonstrate the relation between theory and practice.
  • A new section on self-regulatory climate gives readers an opportunity to explore an exciting new area that they are likely to encounter in practice.  
  • A conceptual description of Hierarchical Linear Modeling (HLM) has been added to help readers understand statistical data organized at more than one level (see the sketch below).
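As a rough illustration of what "data organized at more than one level" means, the sketch below is not taken from the book; it assumes Python with statsmodels, and the variable names (achievement, ses, school) are invented. It fits a random-intercept model for students nested in schools, one common way software estimates models of the HLM type.

```python
# Hedged sketch of a two-level (students-in-schools) random-intercept model.
# Simulated data; variable names are assumptions, not taken from the Primer.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_schools, n_per_school = 20, 25
school = np.repeat(np.arange(n_schools), n_per_school)
school_effect = rng.normal(0, 3, n_schools)[school]       # level-2 (school) variation
ses = rng.normal(0, 1, n_schools * n_per_school)          # level-1 (student) predictor
achievement = 50 + 4 * ses + school_effect + rng.normal(0, 5, n_schools * n_per_school)

df = pd.DataFrame({"school": school, "ses": ses, "achievement": achievement})

# Fixed effect of student SES, random intercept for each school.
model = smf.mixedlm("achievement ~ ses", data=df, groups=df["school"])
print(model.fit().summary())
```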

KEY FEATURES:  

  • Education-specific concrete examples bring concepts to life and engage readers with relevant, meaningful illustrations.
  • Check Your Understanding exercises and questions assess the reader’s ability to understand, value, and apply the content of the chapter.  
  • Strategies and techniques for generating hypotheses help readers understand the process of creating their own hypotheses.
  • Key Terms are highlighted in the text when they first appear and then summarized in a list at the end of the chapter to help reinforce key concepts.
  • A Glossary concisely and clearly defines all the key terms in the text so readers have immediate access to ideas and concepts needing review.
  • Charts throughout the text allow readers to select appropriate statistical techniques for given scenarios.
  • The Diagramming Table (in Chapter 4) enables readers to diagram and dissect hypotheses by ensuring the key elements of a hypothesis are considered, analyzed, and understood.
  • An Elements of a Proposal section (Appendix A) gives readers directions for developing a quantitative research plan and motivates readers to get started—the most difficult step for many.
  • The A Few Writing Tips section (Appendix B) lists a number of salient writing suggestions to help readers avoid common mistakes found in formal writing.

IMPORTANCE OF QUANTITATIVE ANALYSIS IN EDUCATION (TEACHING

Emma Boakye

Research is a systematic enquiry aimed at dealing with problems or understanding situations: the systematic process of collecting and analysing information to increase our understanding of the phenomena under study. It is systematic because it follows a laid-down procedure in arriving at its conclusions. There are two major research paradigms, qualitative and quantitative; between the two sits mixed-methods research, which blends them. Quantitative research is based on the positivist philosophy of how new knowledge is generated. The positivist philosophy holds that there are facts with an objective reality that can be expressed numerically; hence the emphasis is on measurement. Before the 1980s, most studies in education were quantitative in nature. Johnson and Christensen (2008) outline the characteristics of quantitative research as follows: the confirmatory part of the research cycle is emphasized, behaviour is regarded as regular and predictable, and the common aims of research are to explain and predict. An example of a quantitative study could be one that relates the amount of time spent studying to student achievement; the researcher has to measure time spent studying and attainment (achievement) and relate them before reaching a conclusion (Amedahe, 2008). Quantitative research is critical in the assessment of students and lessons, the appraisal of my output over a period of time, and the overall performance of the school in academic and non-academic (non-curriculum) activities such as sports and other social activities. These are some of the critical areas where the impact of quantitative research is seen: quantitative research is more reliable and objective in assessing students and lessons, as data are obtained after lessons have been taught; it enables me to use statistics to generalize a finding; and relationships between variables can be established as cause and effect when conditions are highly controlled.
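The study-time example can be made concrete with a short calculation. The sketch below is only an illustration under assumed data (simulated study hours and exam scores, nothing from the essay); it uses Python with SciPy to relate the two measures through a Pearson correlation, the kind of "relate them before reaching a conclusion" step described above.

```python
# Hedged sketch: relating time spent studying to achievement with a correlation.
# The data are simulated; the essay does not supply any actual figures.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
hours_studied = rng.uniform(0, 10, 50)                        # weekly study hours
achievement = 55 + 3 * hours_studied + rng.normal(0, 8, 50)   # simulated exam scores

r, p = stats.pearsonr(hours_studied, achievement)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")
```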

Related Papers

DR FREDRICK ONASANYA

Quantitative research is a consistent, coherent, data-driven means of measuring what people think from a statistical point of view. It can gather large amounts of data that can easily be arranged and organized into reports for analysis. In quantitative research, numerical data are gathered and mathematically based methods are used for analysis. Quantitative research is basically about gathering numerical data to explicate a phenomenon, especially one that needs prompt answers, using quantitative methods. It is used to measure attitudes, beliefs, behaviours, and other defined variables, and to generalize results from large sample populations. Quantitative research pertains to the systematic empirical investigation of social phenomena by way of statistical, mathematical, or computational techniques. Quantitative data are gathered through surveys, audits, points of purchase, and so on. Quantitative research uses quantifiable data to establish facts and reveal patterns.

Interdisciplinary Research: Collaborative Insights

Pongsakorn Limna, Dr. Sutithep Siripipattanakul, Tamonwan Sitthipon

In the past few decades, educational practices have changed drastically, particularly regarding how information and learning are delivered and processed. Education research frequently employs quantitative methods. Quantitative education research provides numerical data that can prove or disprove a theory, and administrators can easily share the quantitative findings with other academics and districts. While a study may be based on a relatively small sample, educators and researchers can extrapolate the results from quantitative data to predict outcomes for larger student populations and groups. Educational research has a long history of utilising measurement and statistical methods, and common quantitative methods encompass a variety of statistical tests and instruments. As technology has advanced, educators and students have been able to move into the digital era and draw on research-based knowledge, including quantitative research in higher education. Quantitative research methods in education emphasise basic group designs for research and evaluation, analytic methods for exploring relationships between categorical and continuous measures, and statistical analysis procedures for group-design data. The aim of this article is to evaluate quantitative analysis and to outline the research process, sampling techniques, and the advantages and disadvantages of quantitative research.
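One of the analytic methods mentioned in the abstract, exploring a relationship between a categorical and a continuous measure, can be sketched briefly. The example below is an illustration under assumed data (simulated scores, invented group labels): a one-way ANOVA in Python comparing a continuous outcome across three instructional groups.

```python
# Hedged sketch: categorical predictor (instructional group) vs. continuous outcome.
# Scores are simulated; group names are invented for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
lecture = rng.normal(70, 10, 40)
blended = rng.normal(74, 10, 40)
online = rng.normal(68, 10, 40)

f_stat, p_value = stats.f_oneway(lecture, blended, online)
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")
```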

Research on Humanities and Social Sciences

Brian Mumba

How do we decide whether to use a quantitative or qualitative methodology for our study? Are quantitative and qualitative research a dichotomy, or different ends of a continuum? How do we analyse and write up the results of a study for a research article or thesis? Further questions can be asked, such as: is a paradigm the same as a research design? How can we spot the paradigm in a research article? Although these questions are answered quite explicitly, the discussion of paradigms and research design remains technical. This is evidenced by the confusion that people still face in differentiating between a paradigm, methodology, approach and design when doing research. The confusion is further worsened by the quantitative-versus-qualitative research dichotomy. This article addresses quantitative and qualitative research while discussing scientific research paradigms from an educational measurement and evaluation perspective.

Research Journal

Dr. Odera Conzalk Amos Ouma

This study explores the possible suitability and efficacy of utilizing a quantitative research approach within the realm of educational psychology. While quantitative research methods have gained recognition across various disciplines, their appropriateness in the intricate and ever-evolving context of educational psychology necessitates thorough examination. This paper delves into the fundamental characteristics of quantitative research, underscoring its ability to furnish empirical evidence and conduct statistical analyses. Furthermore, the research probes into the advantages and challenges associated with the application of quantitative methods in educational psychology research, taking into account considerations such as sample size, generalizability, and the intricate nature of the psychological phenomena under scrutiny. Through a critical review of existing literature and empirical studies, the aim of this research is to provide insights into the feasibility of employing quantitative approaches, addressing reservations linked to the intricate and multifaceted nature of psychological phenomena within educational settings. The outcomes contribute to the ongoing conversation surrounding research methodologies in educational psychology, presenting scholars and practitioners with a nuanced viewpoint on the potential merits and constraints of embracing a quantitative research approach in this dynamic field.


Quantitative Research in Research on the Education and Learning of Adults

  • First Online: 23 May 2019

  • Ellen Boeren

Part of the book series: Lifelong Learning Book Series (LLLB, volume 24)

This chapter starts from the observation that there is a limited presence of quantitative research published in leading adult education journals such as Adult Education Quarterly, Studies in Continuing Education and the International Journal of Lifelong Education. This observation was also discussed by Fejes and Nylander (2015; see also Chap. 7). As an adult education scholar mainly working with large quantitative datasets, I aim to provide more insight into what quantitative methods have to offer to the field. I will do this through a brief discussion of the role of methodologies and methods in empirical research, but also by engaging with examples of quantitative research available in the scholarly literature, including a range of existing quantitative scales, and by showing how these can be taken forward in new research as tools for the construction of new knowledge. I will first explore potential reasons why the presence of quantitative research in the leading generic adult education journals is so limited.

This chapter is a revised version of a previously published article: Boeren, E. (2018). The methodological underdog: A review of quantitative research in the key adult education journals. Adult Education Quarterly, 68(1), 63–79.

Because of the word limit of this book chapter, it will be impossible to discuss each survey and its questionnaire in detail. However, both the OECD’s PIAAC and the Eurostat website contain detailed documentation relating to their surveys and can be consulted for free.

See http://www.h2020enliven.org

For example, the Essex Summer School in Social Science Data Analysis (see http://www.essex.ac.uk/summerschool/)

Andres, L. (2014). Designing and doing survey research . London: SAGE.

Bath, D., & Smith, C. (2009). The relationship between epistemological beliefs and the propensity for lifelong learning. Studies in Continuing Education, 31 (2), 173–189.

Bathmaker, A. M. (2007). The impact of Skills for Life on adult basic skills in England: How should we interpret trends in participation and achievement? International Journal of Lifelong Education, 26 (3), 295–313.

Blunt, A., & Yang, B. (2002). Factor structure of the adult attitudes toward adult and continuing education scale and its capacity to predict participation behaviour: Evidence for adoption of a revised scale. Adult Education Quarterly, 52 (4), 299–314.

Boeren, E. (2011). Gender differences in formal, non-formal and informal adult learning. Studies in Continuing Education, 33 (3), 333–346.

Boeren, E. (2016). Lifelong learning participation in a changing policy context, an interdisciplinary theory . Basingstoke: Palgrave Macmillan.

Boeren, E., & Holford, J. (2016). Vocationalism varies (a lot): A 12-country multivariate analysis of participation in formal adult learning. Adult Education Quarterly, 66 (2), 120–142.

Boshier, R., Huang, Y., Song, Q., & Song, L. (2006). Market socialism meets the lost generation: Motivational orientations of adult learners in Shanghai. Adult Education Quarterly, 56 (3), 201–222.

Boshier, R. W. (1971). Motivational orientations of adult education participants: A factor analytic exploration of Houle’s typology. Adult Education, 21 , 3–26.

Boyadjieva, P., & Ilieva-Trichkova, P. (2017). Between inclusion and fairness: Social justice perspective to participation in adult education. Adult Education Quarterly, 67 (2), 97–117.

Brinkmann, S., & Kvale, S. (2014). Interviews: Learning the craft of qualitative research interviewing . London: SAGE.

Broek, S., & Hake, B. (2012). Increasing participation of adults in higher education: Factors for successful policies. International Journal of Lifelong Education, 31 (4), 397–417.

Bryman, A. (2012). Social research methods . Oxford: Oxford University Press.

Carney-Crompton, S., & Tan, J. (2002). Support systems, psychological functioning, and academic performance of non-traditional female students. Adult Education Quarterly, 52 (2), 140–154.

Cohen, L., Manion, L., & Morrison, K. (2011). Research methods in education . London: Routledge.

Creswell, J. (2003). Research design: Qualitative, quantitative, and mixed methods approaches . Thousand Oaks: SAGE.

Daley, B., Martin, L., & Roessger, K. (2018). A call for methodological plurality: Reconsidering research approaches in adult education. Adult Education Quarterly, 68 (2), 157–169.

Denzin, N. K., & Lincoln, Y. S. (2003). Handbook of qualitative research . London: SAGE.

Desjardins, R., Rubenson, K., & Milana, M. (2006). Unequal chances to participate in adult learning, international perspectives . Paris: UNESCO.

Desrosières, A. (1998). The politics of large numbers: A history of statistical reasoning . Cambridge, MA: Harvard University Press.

Ercikan, K., & Roth, W. (2006). What good is polarizing research into qualitative and quantitative? Educational Researcher, 35 (5), 14–23.

Evers, A., Kreijns, K., & Van der Heijden, B. (2016). The design and validation of an instrument to measure teachers’ professional development at work. Studies in Continuing Education, 38 (2), 162–178.

Fejes, A., & Nylander, E. (2015). How pluralistic is the research field on adult education? Dominating bibliometrical trends, 2005–2012. European Journal for Research on the Education and Learning of Adults, 6 (2), 103–123.

Fink, A. (1995). How to ask survey questions, the survey kit . London: SAGE.

Fishbein, M., & Ajzen, I. (1975). Belief, attitude, intention and behaviour . Reading, MA: Addison-Wesley.

Gage, N. (1989). The paradigm wars and their aftermath. Teachers College Record, 91 (2), 135–150.

Giancola, J., Munz, D., & Trares, S. (2008). First- versus continuing-generation adult students on college perceptions: Are differences actually because of demographic variance? Adult Education Quarterly, 58 (3), 214–228.

Guglielmino, L. (1977). Development of the self-directed learning readiness scale – Doctoral dissertation . Athens: University of Georgia.

Harvey, B., Rothman, A., & Frecker, R. (2006). A confirmatory factor analysis of the Oddi Continuing Learning Inventory (OCLI). Adult Education Quarterly, 56 (3), 188–200.

Hendricks, S. (2001). Contextual and individual factors and the use of influencing tactics in adult education program planning. Adult Education Quarterly, 51 (3), 219–235.

Holford, J., & Mohorcic-Spolar, V. (2012). Neoliberal and inclusive themes in European lifelong learning policy. In S. Riddell, J. Markowitsch, & E. Weedon (Eds.), Lifelong learning in Europe: Equity and efficiency in the balance (pp. 39–62). Bristol: Policy Press.

Houle, C. O. (1961). The inquiring mind . Madison: University of Wisconsin Press.

Isaac, E., Guy, T., & Valentine, T. (2001). Understanding African American learners’ motivations to learn in church-based adult education. Adult Education Quarterly, 52 (1), 23–38.

Jameson, M., & Fusco, B. (2014). Math anxiety, math self-concept, and math self-efficacy in adult learners compared to traditional undergraduate students. Adult Education Quarterly, 64 (4), 306–322.

Johnson, R., Onwuegbuzie, A., & Turner, A. (2007). Toward a definition of mixed methods research. Journal of Mixed Methods Research, 1 (2), 122–133.

Justice, E., & Dornan, T. (2001). Metacognitive differences between traditional-age and non-traditional-age college students. Adult Education Quarterly, 51 (3), 236–249.

Krupar, A., Horvatek, R., & Byun, S. (2017). Does nonformal education matter? Nonformal education, immigration, and skills in Canada. Adult Education Quarterly, 67 (3), 186–208.

Lavrijsen, J., & Nicaise, I. (2017). Systemic obstacles to lifelong learning: The influence of the educational system design on learning attitudes. Studies in Continuing Education, 39 (2), 176–196.

Lee, S. (2014a). Korean mature women students’ various subjectivities in relation to their motivation for higher education: Generational differences amongst women. International Journal of Lifelong Education, 33 (6), 791–810.

Lee, W. (2014b). Opening up a road to somewhere: Development of associate degree students in Hong Kong. International Journal of Lifelong Education, 33 (5), 607–624.

Likert, R. (1929). A technique for the measurement of attitudes . New York: Columbia University.

Lizzio, A., Stokes, L., & Wilson, K. (2005). Approaches to learning in professional supervision: Supervisee perceptions of processes and outcome. Studies in Continuing Education, 27 (3), 239–256.

McFarland, D. A., Lewis, K., & Goldberg, A. (2016). Sociology in the era of big data: The ascent of forensic social science. The American Sociologist, 47 (12), 9291–9298.

Mulenga, D., & Liang, J.-S. (2008). Motivations for older adults’ participation in distance education: A study at the National Open University of Taiwan. International Journal of Lifelong Education, 27 (3), 289–314.

Piirainen, A., & Viitanen, E. (2010). Transforming expertise from individual to regional community expertise: A four-year study of an education intervention. International Journal of Lifelong Education, 29 (5), 581–596.

Porras-Hernandez, L. H., & Salinas-Amescua, B. (2012). Nonparticipation in adult education: From self-perceptions to alternative explanations. Adult Education Quarterly, 62 (4), 311–331.

Robson, C. (2011). Real world research . Chichester: Wiley.

Roosmaa, E. L., & Saar, E. (2012). Participation in non-formal learning in EU-15 and EU-8 countries: Demand and supply side factors. International Journal of Lifelong Education, 31 (4), 477–501.

Rothes, A., Lemos, M., & Gonçalves, T. (2017). Motivational profiles of adult learners. Adult Education Quarterly, 67 (1), 3–29.

Rubenson, K., & Desjardins, R. (2009). The impact of welfare state regimes on constraints to participation in adult education. A bounded agency model. Adult Education Quarterly, 59 (3), 187–207.

Sellitz, C., Wrightsman, L. S., & Cook, S. W. (1976). Research methods in social relations . New York: Holt, Rinehart & Winston.

Smith, E. (2008). Using secondary data in educational and social research . New York: McGraw-Hill Education.

Steele, B. (1984). The motivational orientations of persisting older learners in the university setting . University of Arkansas. Unpublished doctoral dissertation.

Stockdale, S., & Brockett, R. (2011). Development of the PRO-SDLS: A measurement of self-direction in learning based on the personal responsibility orientation model. Adult Education Quarterly, 61 (2), 161–180.

Tam, M. (2016). Later life learning experiences: Listening to the voices of Chinese elders in Hong Kong. International Journal of Lifelong Education, 35 (5), 569–585.

Tam, M., & Chui, E. (2016). Ageing and learning: What do they mean to elders themselves? Studies in Continuing Education, 38 (2), 195–212.

Teddlie, C., & Tashakkori, A. (2009). Foundations of mixed methods research . Thousand Oaks: SAGE.

Thomas, G. (2009). How to do your research project . London: SAGE.

Vainikainen, M., Wüstenberg, S., Kupiainen, S., Hotulainen, R., & Hautamäki, J. (2015). Development of learning to learn skills in primary school. International Journal of Lifelong Education, 34 (4), 376–392.

van Rhijn, T., & Lero, D. (2014). The influence of self-efficacy beliefs for student parents attending university. International Journal of Lifelong Education, 33 (4), 541–555.

Author information

Authors and Affiliations

Moray House School of Education, University of Edinburgh, Edinburgh, Scotland, UK

Ellen Boeren

Corresponding author

Correspondence to Ellen Boeren.

Editor information

Editors and Affiliations

Department of Behavioural Sciences & Learning, Linköping University, Linköping, Sweden

Andreas Fejes & Erik Nylander

Copyright information

© 2019 Springer Nature Switzerland AG

About this chapter

Boeren, E. (2019). Quantitative Research in Research on the Education and Learning of Adults. In: Fejes, A., Nylander, E. (eds) Mapping out the Research Field of Adult Education and Learning. Lifelong Learning Book Series, vol 24. Springer, Cham. https://doi.org/10.1007/978-3-030-10946-2_8

DOI: https://doi.org/10.1007/978-3-030-10946-2_8

Published: 23 May 2019

Publisher Name: Springer, Cham

Print ISBN: 978-3-030-10945-5

Online ISBN: 978-3-030-10946-2


Share this chapter

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

  • Publish with us

Policies and ethics

  • Find a journal
  • Track your research
  • Subject List
  • Take a Tour
  • For Authors
  • Subscriber Services
  • Publications
  • African American Studies
  • African Studies
  • American Literature
  • Anthropology
  • Architecture Planning and Preservation
  • Art History
  • Atlantic History
  • Biblical Studies
  • British and Irish Literature
  • Childhood Studies
  • Chinese Studies
  • Cinema and Media Studies
  • Communication
  • Criminology
  • Environmental Science
  • Evolutionary Biology
  • International Law
  • International Relations
  • Islamic Studies
  • Jewish Studies
  • Latin American Studies
  • Latino Studies
  • Linguistics
  • Literary and Critical Theory
  • Medieval Studies
  • Military History
  • Political Science
  • Public Health
  • Renaissance and Reformation
  • Social Work
  • Urban Studies
  • Victorian Literature
  • Browse All Subjects

How to Subscribe

  • Free Trials


Data Collection in Educational Research, by James H. McMillan and Laura P. Gogia. Last reviewed: 30 June 2014. Last modified: 30 June 2014. DOI: 10.1093/obo/9780199756810-0087

Data collection methods in educational research are used to gather information that is then analyzed and interpreted. As such, data collection is a very important step in conducting research and can influence results significantly. Once the research question and sources of data are identified, appropriate methods of data collection are determined. Data collection includes a broad range of more specific techniques. Historically, much of the data collection performed in educational research depended on methods developed for studies in the field of psychology, a discipline which took what is termed a “quantitative” approach. This involves using instruments, scales, Tests , and structured observation and interviewing. By the mid- to late twentieth centuries, other disciplines such as anthropology and sociology began to influence educational researchers. Forms of data collection broadened to include what is now called “qualitative” methods, with an emphasis on narratives, participant perspectives, and less structured observation and interviewing. As contemporary educational researchers also draw from fields such as business, political science, and medicine, data collection in education has become a multidisciplinary phenomenon. Because data collection is such a broad topic, General Overviews that attempt to cover all or most techniques tend to offer introductory treatments. Few texts, however, provide comprehensive coverage of every data collection technique. Instead, some cover techniques appropriate for either quantitative or qualitative research approaches. Even more focus on one or two data collection methods within those two research contexts. Consequently, after presenting general overviews, this entry is categorized by data collection appropriate for quantitative and Qualitative Data Collection . These sections, in turn, are subdivided into the major types of quantitative and qualitative data collection techniques. While there are some data collection techniques specific to mixed method research design, which implies a combination of qualitative and quantitative research methodologies, these specific procedures are not emphasized in the present article—readers are referred to the Oxford Bibliography article Mixed Methods Research by Nancy Leech for a comprehensive treatment of mixed method data collection techniques. To locate sources for this article, extensive searches were performed using general-use Internet search engines and educational, psychological, and social science research databases. These searches included keywords around data collection and research methods, as well as specific data collection techniques such as surveys, Tests , Focus Groups , and observation. Frequently cited texts and articles, most recent editions at the time, and sources specific to educational research were given priority. Once these sources were identified, their suggested readings and reference lists were mined for other potential sources. Works or scholars found in multiple reference lists were investigated. When applicable, book reviews in peer-reviewed journals were located and taken into account when curating sources. Sources that demonstrated a high level of impact or offered unique coverage of the topic were included.

General educational research overviews typically include several chapters on data collection, organized into qualitative and quantitative approaches. As a rule they are updated frequently so that they offer timely discussions of methodological trends. Most of them are introductory in nature, written for student researchers. Because of the influence of psychology and other social sciences on the development of data collection in educational research, representative works of psychology ( Trochim 2006 ) and of general social sciences ( Robson 2011 ) are included. Available online, Trochim 2006 is a reader-friendly introduction that provides succinct explanations of most quantitative and qualitative approaches. Olsen 2012 is helpful in showing how data collection techniques used in other disciplines have implications for educational studies. Specific to education, Gall, et al. 2007 is a frequently cited text that contains most educational data collection techniques, although it tends to emphasize more traditional quantitative approaches. Johnson and Christensen 2014 offers a more balanced treatment meant for novice researchers and educational research consumers. Cohen, et al. 2011 also provides a balanced approach, but from a British perspective. Fielding, et al. 2008 offer practical advice on recently developed forms of online data collection, with special attention given to the ethical ramifications of Internet-based data collection. Finally, Arthur, et al. 2012 is unique in this section in that it is an edited work offering short overviews of data collection techniques authored by contemporary leading experts.

Arthur, James, Michael Waring, Robert Coe, and Larry Hedges, eds. 2012. Research methods and methodologies in education . London: SAGE.

A diverse edited text discussing trends in study designs, data collection, and data analysis. It includes twelve chapters devoted to different forms of data collection, written by authors who have recently published extensively on the topic. Annotated bibliographies found at the end of each chapter provide guidance for further reading.

Cohen, Louis, Lawrence Manion, and Keith Morrison. 2011. Research methods in education . 7th ed. London: Routledge.

This long-running, bestselling, comprehensive source offers practical advice with clear theoretical foundations. The newest edition has undergone significant revision. Specific to data collection, revisions include new chapters devoted to data collection via the Internet and visual media. Slides highlighting main points are available on a supplementary website.

Fielding, Nigel, Raymond Lee, and Grant Blank. 2008. The SAGE handbook of online research methods . Thousand Oaks, CA: SAGE.

This extensive handbook presents chapters on Internet research design and data collection written by leading scholars in the field. It discusses using the Internet as an archival resource and a research tool, focusing on the most recent trends in multidisciplinary Internet research.

Gall, Meredith, Joyce Gall, and Walter Borg. 2007. Educational research: An introduction . 8th ed. White Plains, NY: Pearson.

A long-standing, well-respected, nuts-and-bolts perspective on data collection meant to prepare students for conducting original research. Although it tends to emphasize quantitative research methodologies, it has a uniquely rich chapter on historical document analysis.

Johnson, Burke, and Larry Christensen. 2014. Educational research: Quantitative, qualitative, and mixed approaches . 5th ed. Thousand Oaks, CA: SAGE.

A comprehensive introductory text for the consumer and the would-be researcher, with extensive lists of additional resources for gathering all types of data. It discusses quantitative and qualitative research methodologies and data collection evenly but provides extended coverage of questionnaire construction.

Olsen, Wendy. 2012. Data collection: Key debates and methods in social research . London: SAGE.

This recently published toolkit of quantitative, qualitative, and mixed method approaches to data collection provides a more contemporary introduction for both students and research professionals. It offers a helpful overview of data collection as an integral part of research in several different fields of study.

Robson, Colin. 2011. Real world research: A resource for users of social research methods in applied settings. West Sussex, UK: Wiley.

This introductory text is intended for all of the social sciences. There is an applied, integrated emphasis on contemporary quantitative and qualitative data collection techniques in a separate section of the book, including individual and focus group observations, surveys, unstructured and structured interviewing, and tests.

Trochim, William. 2006. Research methods knowledge base.

A free online hypertext textbook on applied social research methods. Data collection techniques associated with qualitative and quantitative research are covered comprehensively. Foundational information appropriate for undergraduates and early graduate students is presented through a series of easy-to-navigate and intuitively ordered webpages. Printed editions are available for purchase in an edition written with James Donnelly (Atomic Dog/Cengage Learning, 2008).



Qualitative vs. Quantitative Research: Comparing the Methods and Strategies for Education Research

No matter the field of study, all research can be divided into two distinct methodologies: qualitative and quantitative research. Both methodologies offer education researchers important insights.

Education research assesses problems in policy, practices, and curriculum design, and it helps administrators identify solutions. Researchers can conduct small-scale studies to learn more about topics related to instruction or larger-scale ones to gain insight into school systems and investigate how to improve student outcomes.

Education research often relies on the quantitative methodology. Quantitative research in education provides numerical data that can prove or disprove a theory, and administrators can easily share the number-based results with other schools and districts. And while the research may speak to a relatively small sample size, educators and researchers can scale the results from quantifiable data to predict outcomes in larger student populations and groups.
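As a minimal illustration of scaling from a sample to a larger population, the sketch below (assumed figures, not from the article) computes a 95% confidence interval for a pass rate estimated from 200 sampled students, the kind of estimate a district might generalize to its wider student body.

```python
# Hedged sketch: generalizing a sample pass rate to the wider student population.
# The counts are invented for illustration.
import math

passed, n = 132, 200
p_hat = passed / n
se = math.sqrt(p_hat * (1 - p_hat) / n)   # standard error of a proportion
z = 1.96                                  # 95% normal critical value
low, high = p_hat - z * se, p_hat + z * se
print(f"Sample pass rate {p_hat:.1%}; 95% CI for the population: [{low:.1%}, {high:.1%}]")
```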

Qualitative vs. Quantitative Research in Education: Definitions

Although there are many overlaps in the objectives of qualitative and quantitative research in education, researchers must understand the fundamental functions of each methodology in order to design and carry out an impactful research study. In addition, they must understand the differences that set qualitative and quantitative research apart in order to determine which methodology is better suited to specific education research topics.

Generate Hypotheses with Qualitative Research

Qualitative research focuses on thoughts, concepts, or experiences. The data collected often comes in narrative form and concentrates on unearthing insights that can lead to testable hypotheses. Educators use qualitative research in a study’s exploratory stages to uncover patterns or new angles.

Form Strong Conclusions with Quantitative Research

Quantitative research in education and other fields of inquiry is expressed in numbers and measurements. This type of research aims to find data to confirm or test a hypothesis.

Differences in Data Collection Methods

Keeping in mind the main distinction in qualitative vs. quantitative research—gathering descriptive information as opposed to numerical data—it stands to reason that there are different ways to acquire data for each research methodology. While certain approaches do overlap, the way researchers apply these collection techniques depends on their goal.

Interviews, for example, are common in both modes of research. An interview with students that features open-ended questions intended to reveal ideas and beliefs around attendance will provide qualitative data. This data may reveal a problem among students, such as a lack of access to transportation, that schools can help address.

An interview can also include questions posed to receive numerical answers. A case in point: how many days a week do students have trouble getting to school, and of those days, how often is a transportation-related issue the cause? In this example, qualitative and quantitative methodologies can lead to similar conclusions, but the research will differ in intent, design, and form.
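
As a minimal sketch of how such closed-ended answers become quantitative data (the responses below are invented for illustration), the tallying itself is straightforward:

from statistics import mean

# Hypothetical interview responses from ten students: days per week they have
# trouble getting to school, and how many of those days involve transportation
trouble_days = [3, 1, 0, 4, 2, 5, 1, 0, 2, 3]
transport_days = [2, 0, 0, 4, 1, 5, 1, 0, 1, 2]

print("Average days with trouble per week:", mean(trouble_days))
share = sum(transport_days) / sum(trouble_days)
print(f"Share of those days tied to transportation: {share:.0%}")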

Taking a look at behavioral observation, another common method used for both qualitative and quantitative research, qualitative data may consider a variety of factors, such as facial expressions, verbal responses, and body language.

On the other hand, a quantitative approach will create a coding scheme for certain predetermined behaviors and observe these in a quantifiable manner.

Qualitative Research Methods

  • Case Studies: Researchers conduct in-depth investigations into an individual, group, event, or community, typically gathering data through observation and interviews.
  • Focus Groups: A moderator (or researcher) guides conversation around a specific topic among a group of participants.
  • Ethnography: Researchers interact with and observe a specific societal or ethnic group in their real-life environment.
  • Interviews: Researchers ask participants questions to learn about their perspectives on a particular subject.

Quantitative Research Methods

  • Questionnaires and Surveys: Participants answer a set of closed-ended or multiple-choice questions focused on a particular topic.
  • Experiments: Researchers control and test variables to demonstrate cause-and-effect relationships.
  • Observations: Researchers look at quantifiable patterns and behavior.
  • Structured Interviews: Using a predetermined structure, researchers ask participants a fixed set of questions to acquire numerical data.

Choosing a Research Strategy

When choosing which research strategy to employ for a project or study, a number of considerations apply. One key piece of information to help determine whether to use a qualitative vs. quantitative research method is which phase of development the study is in.

For example, if a project is in its early stages and requires more research to find a testable hypothesis, qualitative research methods might prove most helpful. On the other hand, if the research team has already established a hypothesis or theory, quantitative research methods will provide data that can validate the theory or refine it for further testing.

It’s also important to understand a project’s research goals. For instance, do researchers aim to produce findings that reveal how to best encourage student engagement in math? Or is the goal to determine how many students are passing geometry? These two scenarios require distinct sets of data, which will determine the best methodology to employ.

In some situations, studies will benefit from a mixed-methods approach. Using the goals in the above example, one set of data could find the percentage of students passing geometry, which would be quantitative. The research team could also lead a focus group with the students achieving success to discuss which techniques and teaching practices they find most helpful, which would produce qualitative data.

Learn How to Put Education Research into Action

Those with an interest in learning how to harness research to develop innovative ideas to improve education systems may want to consider pursuing a doctoral degree. American University’s School of Education online offers a Doctor of Education (EdD) in Education Policy and Leadership that prepares future educators, school administrators, and other education professionals to become leaders who effect positive change in schools. Courses such as Applied Research Methods I: Enacting Critical Research provide students with the techniques and research skills needed to begin conducting research that explores new ways to enhance education. Learn more about American University’s EdD in Education Policy and Leadership.

Quantitative Research Methods in Medical Education

Submitted for publication January 8, 2018. Accepted for publication November 29, 2018.


John T. Ratelle, Adam P. Sawatsky, Thomas J. Beckman; Quantitative Research Methods in Medical Education. Anesthesiology 2019; 131:23–35. doi: https://doi.org/10.1097/ALN.0000000000002727

There has been a dramatic growth of scholarly articles in medical education in recent years. Evaluating medical education research requires specific orientation to issues related to format and content. Our goal is to review the quantitative aspects of research in medical education so that clinicians may understand these articles with respect to framing the study, recognizing methodologic issues, and utilizing instruments for evaluating the quality of medical education research. This review can be used both as a tool when appraising medical education research articles and as a primer for clinicians interested in pursuing scholarship in medical education.

Image: J. P. Rathmell and Terri Navarette.

There has been an explosion of research in the field of medical education. A search of PubMed demonstrates that more than 40,000 articles have been indexed under the medical subject heading “Medical Education” since 2010, which is more than the total number of articles indexed under this heading in the 1980s and 1990s combined. Keeping up to date requires that practicing clinicians have the skills to interpret and appraise the quality of research articles, especially when serving as editors, reviewers, and consumers of the literature.

While medical education shares many characteristics with other biomedical fields, substantial particularities exist. We recognize that practicing clinicians may not be familiar with the nuances of education research and how to assess its quality. Therefore, our purpose is to provide a review of quantitative research methodologies in medical education. Specifically, we describe a structure that can be used when conducting or evaluating medical education research articles.

Clarifying the Research Purpose

Clarifying the research purpose is an essential first step when reading or conducting scholarship in medical education. 1 Medical education research can serve a variety of purposes, from advancing the science of learning to improving the outcomes of medical trainees and the patients they care for. However, a well-designed study has limited value if it addresses vague, redundant, or unimportant medical education research questions.

The following questions can help frame the appraisal of a medical education research article:

  • What is the research topic and why is it important? What is unknown about the research topic? Why is further research necessary?
  • What is the conceptual framework being used to approach the study?
  • What is the statement of study intent?
  • What are the research methodology and study design? Are they appropriate for the study objective(s)?
  • Which threats to internal validity are most relevant for the study?
  • What is the outcome and how was it measured?
  • Can the results be trusted? What are the validity and reliability of the measurements?
  • How were research subjects selected? Is the research sample representative of the target population?
  • Was the data analysis appropriate for the study design and type of data?
  • What is the effect size? Do the results have educational significance?

Fortunately, there are steps to ensure that the purpose of a research study is clear and logical. Table 1   2–5   outlines these steps, which will be described in detail in the following sections. We describe these elements not as a simple “checklist,” but as an advanced organizer that can be used to understand a medical education research study. These steps can also be used by clinician educators who are new to the field of education research and who wish to conduct scholarship in medical education.

Table 1. Steps in Clarifying the Purpose of a Research Study in Medical Education

Literature Review and Problem Statement

A literature review is the first step in clarifying the purpose of a medical education research article. 2 , 5 , 6   When conducting scholarship in medical education, a literature review helps researchers develop an understanding of their topic of interest. This understanding includes both existing knowledge about the topic as well as key gaps in the literature, which aids the researcher in refining their study question. Additionally, a literature review helps researchers identify conceptual frameworks that have been used to approach the research topic. 2  

When reading scholarship in medical education, a successful literature review provides background information so that even someone unfamiliar with the research topic can understand the rationale for the study. Located in the introduction of the manuscript, the literature review guides the reader through what is already known in a manner that highlights the importance of the research topic. The literature review should also identify key gaps in the literature so the reader can understand the need for further research. This gap description includes an explicit problem statement that summarizes the important issues and provides a reason for the study. 2 , 4   The following is one example of a problem statement:

“Identifying gaps in the competency of anesthesia residents in time for intervention is critical to patient safety and an effective learning system… [However], few available instruments relate to complex behavioral performance or provide descriptors…that could inform subsequent feedback, individualized teaching, remediation, and curriculum revision.” 7  

This problem statement articulates the research topic (identifying resident performance gaps), why it is important (to intervene for the sake of learning and patient safety), and current gaps in the literature (few tools are available to assess resident performance). The researchers have now underscored why further research is needed and have helped readers anticipate the overarching goals of their study (to develop an instrument to measure anesthesiology resident performance). 4  

The Conceptual Framework

Following the literature review and articulation of the problem statement, the next step in clarifying the research purpose is to select a conceptual framework that can be applied to the research topic. Conceptual frameworks are “ways of thinking about a problem or a study, or ways of representing how complex things work.” 3   Just as clinical trials are informed by basic science research in the laboratory, conceptual frameworks often serve as the “basic science” that informs scholarship in medical education. At a fundamental level, conceptual frameworks provide a structured approach to solving the problem identified in the problem statement.

Conceptual frameworks may take the form of theories, principles, or models that help to explain the research problem by identifying its essential variables or elements. Alternatively, conceptual frameworks may represent evidence-based best practices that researchers can apply to an issue identified in the problem statement. 3   Importantly, there is no single best conceptual framework for a particular research topic, although the choice of a conceptual framework is often informed by the literature review and knowing which conceptual frameworks have been used in similar research. 8   For further information on selecting a conceptual framework for research in medical education, we direct readers to the work of Bordage 3   and Irby et al. 9  

To illustrate how different conceptual frameworks can be applied to a research problem, suppose you encounter a study to reduce the frequency of communication errors among anesthesiology residents during day-to-night handoff. Table 2 10 , 11   identifies two different conceptual frameworks researchers might use to approach the task. The first framework, cognitive load theory, has been proposed as a conceptual framework to identify potential variables that may lead to handoff errors. 12   Specifically, cognitive load theory identifies the three factors that affect short-term memory and thus may lead to communication errors:

Table 2. Conceptual Frameworks to Address the Issue of Handoff Errors in the Intensive Care Unit

  • Intrinsic load: Inherent complexity or difficulty of the information the resident is trying to learn (e.g., complex patients).
  • Extraneous load: Distractions or demands on short-term memory that are not related to the information the resident is trying to learn (e.g., background noise, interruptions).
  • Germane load: Effort or mental strategies used by the resident to organize and understand the information he/she is trying to learn (e.g., teach back, note taking).

Using cognitive load theory as a conceptual framework, researchers may design an intervention to reduce extraneous load and help the resident remember the overnight to-do’s. An example might be dedicated, pager-free handoff times where distractions are minimized.

The second framework identified in table 2 , the I-PASS (Illness severity, Patient summary, Action list, Situational awareness and contingency planning, and Synthesis by receiver) handoff mnemonic, 11   is an evidence-based best practice that, when incorporated as part of a handoff bundle, has been shown to reduce handoff errors on pediatric wards. 13   Researchers choosing this conceptual framework may adapt some or all of the I-PASS elements for resident handoffs in the intensive care unit.

Note that both of the conceptual frameworks outlined above provide researchers with a structured approach to addressing the issue of handoff errors; one is not necessarily better than the other. Indeed, it is possible for researchers to use both frameworks when designing their study. Ultimately, we provide this example to demonstrate the necessity of selecting conceptual frameworks to clarify the research purpose. 3 , 8   Readers should look for conceptual frameworks in the introduction section and should be wary of their omission, as commonly seen in less well-developed medical education research articles. 14  

Statement of Study Intent

After reviewing the literature, articulating the problem statement, and selecting a conceptual framework to address the research topic, the final step in clarifying the research purpose is the statement of study intent. The statement of study intent is arguably the most important element of framing the study because it makes the research purpose explicit. 2   Consider the following example:

This study aimed to test the hypothesis that the introduction of the BASIC Examination was associated with an accelerated knowledge acquisition during residency training, as measured by increments in annual ITE scores. 15  

This statement of study intent succinctly identifies several key study elements, including the population (anesthesiology residents), the intervention/independent variable (introduction of the BASIC Examination), the outcome/dependent variable (knowledge acquisition, as measured by In-Training Examination [ITE] scores), and the hypothesized relationship between the independent and dependent variables (the authors hypothesize a positive correlation between the BASIC Examination and the speed of knowledge acquisition). 6, 14

The statement of study intent will sometimes manifest as a research objective, rather than hypothesis or question. In such instances there may not be explicit independent and dependent variables, but the study population and research aim should be clearly identified. The following is an example:

“In this report, we present the results of 3 [years] of course data with respect to the practice improvements proposed by participating anesthesiologists and their success in implementing those plans. Specifically, our primary aim is to assess the frequency and type of improvements that were completed and any factors that influence completion.” 16  

The statement of study intent is the logical culmination of the literature review, problem statement, and conceptual framework, and is a transition point between the Introduction and Methods sections of a medical education research report. Nonetheless, a systematic review of experimental research in medical education demonstrated that statements of study intent are absent in the majority of articles. 14   When reading a medical education research article where the statement of study intent is absent, it may be necessary to infer the research aim by gathering information from the Introduction and Methods sections. In these cases, it can be useful to identify the following key elements 6 , 14 , 17   :

  • Population of interest/type of learner (e.g., pain medicine fellows or anesthesiology residents)
  • Independent/predictor variable (e.g., educational intervention or characteristic of the learners)
  • Dependent/outcome variable (e.g., intubation skills or knowledge of anesthetic agents)
  • Relationship between the variables (e.g., “improve” or “mitigate”)

Occasionally, it may be difficult to differentiate the independent study variable from the dependent study variable. 17   For example, consider a study aiming to measure the relationship between burnout and personal debt among anesthesiology residents. Do the researchers believe burnout might lead to high personal debt, or that high personal debt may lead to burnout? This “chicken or egg” conundrum reinforces the importance of the conceptual framework which, if present, should serve as an explanation or rationale for the predicted relationship between study variables.

Methodology

Research methodology is the “…design or plan that shapes the methods to be used in a study.” 1 Essentially, methodology is the general strategy for answering a research question, whereas methods are the specific steps and techniques that are used to collect data and implement the strategy. Our objective here is to provide an overview of quantitative methodologies (i.e., approaches) in medical education research.

The choice of research methodology is made by balancing the approach that best answers the research question against the feasibility of completing the study. There is no perfect methodology because each has its own potential caveats, flaws and/or sources of bias. Before delving into an overview of the methodologies, it is important to highlight common sources of bias in education research. We use the term internal validity to describe the degree to which the findings of a research study represent “the truth,” as opposed to some alternative hypothesis or variables. 18   Table 3   18–20   provides a list of common threats to internal validity in medical education research, along with tactics to mitigate these threats.

Table 3. Threats to Internal Validity and Strategies to Mitigate Their Effects

Experimental Research

The fundamental tenet of experimental research is the manipulation of an independent or experimental variable to measure its effect on a dependent or outcome variable.

True Experiment

True experimental study designs minimize threats to internal validity by randomizing study subjects to experimental and control groups. Through ensuring that differences between groups are—beyond the intervention/variable of interest—purely due to chance, researchers reduce the internal validity threats related to subject characteristics, time-related maturation, and regression to the mean. 18 , 19  
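
As a minimal sketch of the randomization step (the learner labels below are invented placeholders, not data from any cited study), random assignment to an experimental and a control group takes only a few lines:

import random

learners = ["Resident A", "Resident B", "Resident C", "Resident D",
            "Resident E", "Resident F", "Resident G", "Resident H"]

random.shuffle(learners)                   # random order, so group differences are due to chance
half = len(learners) // 2
experimental_group = learners[:half]       # receives the intervention under study
control_group = learners[half:]            # receives usual instruction

print("Experimental:", experimental_group)
print("Control:", control_group)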

Quasi-experiment

There are many instances in medical education where randomization may not be feasible or ethical. For instance, researchers wanting to test the effect of a new curriculum among medical students may not be able to randomize learners due to competing curricular obligations and schedules. In these cases, researchers may be forced to assign subjects to experimental and control groups based upon some other criterion beyond randomization, such as different classrooms or different sections of the same course. This process, called quasi-randomization, does not inherently lead to internal validity threats, as long as research investigators are mindful of measuring and controlling for extraneous variables between study groups. 19  

Single-group Methodologies

All experimental study designs compare two or more groups: experimental and control. A common experimental study design in medical education research is the single-group pretest–posttest design, which compares a group of learners before and after the implementation of an intervention. 21   In essence, a single-group pre–post design compares an experimental group ( i.e. , postintervention) to a “no-intervention” control group ( i.e. , preintervention). 19   This study design is problematic for several reasons. Consider the following hypothetical example: A research article reports the effects of a year-long intubation curriculum for first-year anesthesiology residents. All residents participate in monthly, half-day workshops over the course of an academic year. The article reports a positive effect on residents’ skills as demonstrated by a significant improvement in intubation success rates at the end of the year when compared to the beginning.

This study does little to advance the science of learning among anesthesiology residents. While this hypothetical report demonstrates an improvement in residents’ intubation success before versus after the intervention, it does not tell why the workshop worked, how it compares to other educational interventions, or how it fits in to the broader picture of anesthesia training.

Single-group pre–post study designs open themselves to a myriad of threats to internal validity. 20   In our hypothetical example, the improvement in residents’ intubation skills may have been due to other educational experience(s) ( i.e. , implementation threat) and/or improvement in manual dexterity that occurred naturally with time ( i.e. , maturation threat), rather than the airway curriculum. Consequently, single-group pre–post studies should be interpreted with caution. 18  

Repeated testing, before and after the intervention, is one strategy that can be used to reduce some of the inherent limitations of the single-group study design. Repeated pretesting can mitigate the effect of regression toward the mean, a statistical phenomenon whereby low pretest scores tend to move closer to the mean on subsequent testing (regardless of intervention). 20 Likewise, repeated posttesting at multiple time intervals can provide potentially useful information about the short- and long-term effects of an intervention (e.g., the “durability” of the gain in knowledge, skill, or attitude).
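
Regression toward the mean is easy to see in a small simulation (a hypothetical sketch; the score model and numbers are invented): learners with the lowest pretest scores tend to score closer to the group mean on a second test even when no intervention occurs, because part of any low score is measurement noise.

import random
from statistics import mean

random.seed(1)
true_ability = [random.gauss(70, 8) for _ in range(200)]      # stable underlying skill
test1 = [a + random.gauss(0, 10) for a in true_ability]       # pretest = ability + noise
test2 = [a + random.gauss(0, 10) for a in true_ability]       # retest, no intervention

cutoff = sorted(test1)[len(test1) // 4]                       # lowest quartile on the pretest
low_scorers = [i for i, score in enumerate(test1) if score <= cutoff]

print("Low scorers, test 1:", round(mean(test1[i] for i in low_scorers), 1))
print("Low scorers, test 2:", round(mean(test2[i] for i in low_scorers), 1))  # drifts toward the overall mean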

Observational Research

Unlike experimental studies, observational research does not involve manipulation of any variables. These studies often involve measuring associations, developing psychometric instruments, or conducting surveys.

Association Research

Association research seeks to identify relationships between two or more variables within a group or groups (correlational research), or similarities/differences between two or more existing groups (causal–comparative research). For example, correlational research might seek to measure the relationship between burnout and educational debt among anesthesiology residents, while causal–comparative research may seek to measure differences in educational debt and/or burnout between anesthesiology and surgery residents. Notably, association research may identify relationships between variables, but does not necessarily support a causal relationship between them.
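
As a sketch of the correlational case (hypothetical numbers; Pearson's r is computed here directly from its textbook definition rather than from a statistics package):

import math
from statistics import mean

def pearson_r(x, y):
    # Pearson correlation: covariance divided by the product of standard deviations
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

burnout_score = [35, 42, 50, 28, 61, 44, 55, 30]            # invented burnout inventory scores
debt_thousands = [150, 180, 220, 120, 260, 200, 240, 100]   # invented educational debt

print("r =", round(pearson_r(burnout_score, debt_thousands), 2))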

Psychometric and Survey Research

Psychometric instruments measure a psychologic or cognitive construct such as knowledge, satisfaction, beliefs, and symptoms. Surveys are one type of psychometric instrument, but many other types exist, such as evaluations of direct observation, written examinations, or screening tools. 22   Psychometric instruments are ubiquitous in medical education research and can be used to describe a trait within a study population ( e.g. , rates of depression among medical students) or to measure associations between study variables ( e.g. , association between depression and board scores among medical students).

Psychometric and survey research studies are prone to the internal validity threats listed in table 3 , particularly those relating to mortality, location, and instrumentation. 18   Additionally, readers must ensure that the instrument scores can be trusted to truly represent the construct being measured. For example, suppose you encounter a research article demonstrating a positive association between attending physician teaching effectiveness as measured by a survey of medical students, and the frequency with which the attending physician provides coffee and doughnuts on rounds. Can we be confident that this survey administered to medical students is truly measuring teaching effectiveness? Or is it simply measuring the attending physician’s “likability”? Issues related to measurement and the trustworthiness of data are described in detail in the following section on measurement and the related issues of validity and reliability.

Measurement

Measurement refers to “the assigning of numbers to individuals in a systematic way as a means of representing properties of the individuals.” 23 Research data can only be trusted insofar as we trust the measurement used to obtain the data. Measurement is of particular importance in medical education research because many of the constructs being measured (e.g., knowledge, skill, attitudes) are abstract and subject to measurement error. 24 This section highlights two specific issues related to the trustworthiness of data: the validity and reliability of measurements.

Validity regarding the scores of a measurement instrument “refers to the degree to which evidence and theory support the interpretations of the [instrument’s results] for the proposed use of the [instrument].” 25   In essence, do we believe the results obtained from a measurement really represent what we were trying to measure? Note that validity evidence for the scores of a measurement instrument is separate from the internal validity of a research study. Several frameworks for validity evidence exist. Table 4 2 , 22 , 26   represents the most commonly used framework, developed by Messick, 27   which identifies sources of validity evidence—to support the target construct—from five main categories: content, response process, internal structure, relations to other variables, and consequences.

Table 4. Sources of Validity Evidence for Measurement Instruments

Reliability

Reliability refers to the consistency of scores for a measurement instrument. 22 , 25 , 28   For an instrument to be reliable, we would anticipate that two individuals rating the same object of measurement in a specific context would provide the same scores. 25   Further, if the scores for an instrument are reliable between raters of the same object of measurement, then we can extrapolate that any difference in scores between two objects represents a true difference across the sample, and is not due to random variation in measurement. 29   Reliability can be demonstrated through a variety of methods such as internal consistency ( e.g. , Cronbach’s alpha), temporal stability ( e.g. , test–retest reliability), interrater agreement ( e.g. , intraclass correlation coefficient), and generalizability theory (generalizability coefficient). 22 , 29  
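
As one concrete example, internal consistency (Cronbach's alpha) can be computed directly from its definition, alpha = k/(k - 1) * (1 - sum of item variances / variance of total scores). The sketch below is a generic implementation with invented rating data, not code from any instrument discussed here:

import numpy as np

def cronbach_alpha(scores):
    # scores: 2-D array, rows = respondents, columns = instrument items
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                              # number of items
    item_variances = scores.var(axis=0, ddof=1)      # sample variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Five respondents rating four items on a 1-5 scale (invented data)
ratings = [[4, 5, 4, 4],
           [2, 3, 2, 3],
           [5, 5, 4, 5],
           [3, 3, 3, 2],
           [4, 4, 5, 4]]
print(round(cronbach_alpha(ratings), 2))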

Example of a Validity and Reliability Argument

This section provides an illustration of validity and reliability in medical education. We use the signaling questions outlined in table 4 to make a validity and reliability argument for the Harvard Assessment of Anesthesia Resident Performance (HARP) instrument. 7   The HARP was developed by Blum et al. to measure the performance of anesthesia trainees that is required to provide safe anesthetic care to patients. According to the authors, the HARP is designed to be used “…as part of a multiscenario, simulation-based assessment” of resident performance. 7  

Content Validity: Does the Instrument’s Content Represent the Construct Being Measured?

To demonstrate content validity, instrument developers should describe the construct being measured and how the instrument was developed, and justify their approach. 25   The HARP is intended to measure resident performance in the critical domains required to provide safe anesthetic care. As such, investigators note that the HARP items were created through a two-step process. First, the instrument’s developers interviewed anesthesiologists with experience in resident education to identify the key traits needed for successful completion of anesthesia residency training. Second, the authors used a modified Delphi process to synthesize the responses into five key behaviors: (1) formulate a clear anesthetic plan, (2) modify the plan under changing conditions, (3) communicate effectively, (4) identify performance improvement opportunities, and (5) recognize one’s limits. 7 , 30  

Response Process Validity: Are Raters Interpreting the Instrument Items as Intended?

In the case of the HARP, the developers included a scoring rubric with behavioral anchors to ensure that faculty raters could clearly identify how resident performance in each domain should be scored. 7  

Internal Structure Validity: Do Instrument Items Measuring Similar Constructs Yield Homogenous Results? Do Instrument Items Measuring Different Constructs Yield Heterogeneous Results?

Item-correlation for the HARP demonstrated a high degree of correlation between some items ( e.g. , formulating a plan and modifying the plan under changing conditions) and a lower degree of correlation between other items ( e.g. , formulating a plan and identifying performance improvement opportunities). 30   This finding is expected since the items within the HARP are designed to assess separate performance domains, and we would expect residents’ functioning to vary across domains.

Relationship to Other Variables’ Validity: Do Instrument Scores Correlate with Other Measures of Similar or Different Constructs as Expected?

As it applies to the HARP, one would expect that the performance of anesthesia residents will improve over the course of training. Indeed, HARP scores were found to be generally higher among third-year residents compared to first-year residents. 30  

Consequence Validity: Are Instrument Results Being Used as Intended? Are There Unintended or Negative Uses of the Instrument Results?

While investigators did not intentionally seek out consequence validity evidence for the HARP, unanticipated consequences of HARP scores were identified by the authors as follows:

“Data indicated that CA-3s had a lower percentage of worrisome scores (rating 2 or lower) than CA-1s… However, it is concerning that any CA-3s had any worrisome scores…low performance of some CA-3 residents, albeit in the simulated environment, suggests opportunities for training improvement.” 30  

That is, using the HARP to measure the performance of CA-3 anesthesia residents had the unintended consequence of identifying the need for improvement in resident training.

Reliability: Are the Instrument’s Scores Reproducible and Consistent between Raters?

The HARP was applied by two raters for every resident in the study across seven different simulation scenarios. The investigators conducted a generalizability study of HARP scores to estimate the variance in assessment scores that was due to the resident, the rater, and the scenario. They found little variance was due to the rater ( i.e. , scores were consistent between raters), indicating a high level of reliability. 7  

Sampling refers to the selection of research subjects ( i.e. , the sample) from a larger group of eligible individuals ( i.e. , the population). 31   Effective sampling leads to the inclusion of research subjects who represent the larger population of interest. Alternatively, ineffective sampling may lead to the selection of research subjects who are significantly different from the target population. Imagine that researchers want to explore the relationship between burnout and educational debt among pain medicine specialists. The researchers distribute a survey to 1,000 pain medicine specialists (the population), but only 300 individuals complete the survey (the sample). This result is problematic because the characteristics of those individuals who completed the survey and the entire population of pain medicine specialists may be fundamentally different. It is possible that the 300 study subjects may be experiencing more burnout and/or debt, and thus, were more motivated to complete the survey. Alternatively, the 700 nonresponders might have been too busy to respond and even more burned out than the 300 responders, which would suggest that the study findings were even more amplified than actually observed.

When evaluating a medical education research article, it is important to identify the sampling technique the researchers employed, how it might have influenced the results, and whether the results apply to the target population. 24  

Sampling Techniques

Sampling techniques generally fall into two categories: probability- or nonprobability-based. Probability-based sampling ensures that each individual within the target population has an equal opportunity of being selected as a research subject. Most commonly, this is done through random sampling, which should lead to a sample of research subjects that is similar to the target population. If significant differences between sample and population exist, those differences should be due to random chance, rather than systematic bias. The difference between data from a random sample and that from the population is referred to as sampling error. 24  
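
A short sketch of probability-based sampling (hypothetical population values; random.sample draws without replacement, so each individual has the same chance of selection) also illustrates sampling error, the gap between a sample mean and the population mean for a given draw:

import random
from statistics import mean

random.seed(0)
# Hypothetical population: educational debt (in thousands) for 1,000 specialists
population = [random.gauss(200, 60) for _ in range(1000)]

for n in (30, 100, 300):
    sample = random.sample(population, n)        # simple random sample of size n
    error = mean(sample) - mean(population)      # sampling error for this particular draw
    print(f"n = {n:3d}   sample mean = {mean(sample):6.1f}   sampling error = {error:+6.1f}")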

Nonprobability-based sampling involves selecting research participants such that inclusion of some individuals may be more likely than the inclusion of others. 31   Convenience sampling is one such example and involves selection of research subjects based upon ease or opportuneness. Convenience sampling is common in medical education research, but, as outlined in the example at the beginning of this section, it can lead to sampling bias. 24   When evaluating an article that uses nonprobability-based sampling, it is important to look for participation/response rate. In general, a participation rate of less than 75% should be viewed with skepticism. 21   Additionally, it is important to determine whether characteristics of participants and nonparticipants were reported and if significant differences between the two groups exist.

Data Analysis and Interpretation

Interpreting medical education research requires a basic understanding of common ways in which quantitative data are analyzed and displayed. In this section, we highlight two broad topics that are of particular importance when evaluating research articles.

The Nature of the Measurement Variable

Measurement variables in quantitative research generally fall into three categories: nominal, ordinal, or interval. 24 Nominal variables (sometimes called categorical variables) involve data that can be placed into discrete categories without a specific order or structure. Examples include sex (male or female) and professional degree (M.D., D.O., M.B.B.S., etc.), where there is no clear hierarchical order to the categories. Ordinal variables can be ranked according to some criterion, but the spacing between categories may not be equal. Examples of ordinal variables may include measurements of satisfaction (satisfied vs. unsatisfied), agreement (disagree vs. agree), and educational experience (medical student, resident, fellow). As it applies to educational experience, it is noteworthy that even though education can be quantified in years, the spacing between years (i.e., educational “growth”) remains unequal. For instance, the difference in performance between second- and third-year medical students is dramatically different from that between third- and fourth-year medical students. Interval variables can also be ranked according to some criteria, but, unlike ordinal variables, the spacing between variable categories is equal. Examples of interval variables include test scores and salary. However, the conceptual boundaries between these measurement variables are not always clear, as in the case where ordinal scales can be assumed to have the properties of an interval scale, so long as the data’s distribution is not substantially skewed. 32

Understanding the nature of the measurement variable is important when evaluating how the data are analyzed and reported. Medical education research commonly uses measurement instruments with items that are rated on Likert-type scales, whereby the respondent is asked to assess their level of agreement with a given statement. The response is often translated into a corresponding number (e.g., 1 = strongly disagree, 3 = neutral, 5 = strongly agree). Notably, scores from Likert-type scales are sometimes not normally distributed (i.e., they are skewed toward one end of the scale), indicating that the spacing between scores is unequal and the variable is ordinal in nature. In these cases, it is recommended to report results as frequencies or medians, rather than means and SDs. 33

Consider an article evaluating medical students’ satisfaction with a new curriculum. Researchers measure satisfaction using a Likert-type scale (1 = very unsatisfied, 2 = unsatisfied, 3 = neutral, 4 = satisfied, 5 = very satisfied). A total of 20 medical students evaluate the curriculum, 10 of whom rate their satisfaction as “satisfied,” and 10 of whom rate it as “very satisfied.” In this case, it does not make much sense to report an average score of 4.5; it makes more sense to report results in terms of frequency ( e.g. , half of the students were “very satisfied” with the curriculum, and half were not).
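
The same tally can be done in a few lines; the sketch below encodes the 20 hypothetical responses from the example above:

from collections import Counter
from statistics import mean

labels = {4: "satisfied", 5: "very satisfied"}
ratings = [4] * 10 + [5] * 10                     # the 20 hypothetical responses
print("Mean rating:", mean(ratings))              # 4.5 -- a score no student actually gave
for score, count in sorted(Counter(ratings).items()):
    print(f"{labels[score]}: {count} of {len(ratings)}")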

Effect Size and CIs

In medical education, as in other research disciplines, it is common to report statistically significant results (i.e., small P values) in order to increase the likelihood of publication. 34, 35 However, a significant P value does not, by itself, convey the educational impact of the study results. A statement like “Intervention x was associated with a significant improvement in learners’ intubation skill compared to education intervention y (P < 0.05)” tells us that there is less than a 5% probability of observing a difference this large between interventions x and y if no true difference existed. Yet that does not mean that the study intervention necessarily caused the nonchance results, or indicate whether the between-group difference is educationally significant. Therefore, readers should look beyond the P value to the effect size and/or CI when interpreting study results. 36, 37

Effect size is “the magnitude of the difference between two groups,” which helps to quantify the educational significance of the research results. 37 Common measures of effect size include Cohen’s d (the standardized difference between two means), the risk ratio (which compares binary outcomes between two groups), and Pearson’s r correlation (the linear relationship between two continuous variables). 37 CIs represent “a range of values around a sample mean or proportion” and are a measure of precision. 31 While effect size and CIs give more useful information than statistical significance alone, they are commonly omitted from medical education research articles. 35 In such instances, readers should be wary of overinterpreting a P value in isolation. For further information on effect size and CIs, we direct readers to the work of Sullivan and Feinn 37 and Hulley et al. 31
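
For readers who want to see the arithmetic, the sketch below computes Cohen's d (the difference in group means divided by the pooled standard deviation) and a normal-approximation 95% CI for the difference in means; the score lists are invented, and the 1.96 multiplier assumes reasonably large samples:

import math
from statistics import mean, variance

def cohens_d(a, b):
    na, nb = len(a), len(b)
    pooled_sd = math.sqrt(((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2))
    return (mean(a) - mean(b)) / pooled_sd

def diff_ci95(a, b, z=1.96):
    # normal-approximation CI for the difference in group means
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    d = mean(a) - mean(b)
    return d - z * se, d + z * se

group_x = [78, 82, 85, 90, 74, 88, 81, 79]   # skill scores after intervention x (invented)
group_y = [70, 75, 72, 80, 68, 77, 73, 71]   # skill scores after intervention y (invented)
print("Cohen's d:", round(cohens_d(group_x, group_y), 2))
print("95% CI for the mean difference:", tuple(round(v, 1) for v in diff_ci95(group_x, group_y)))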

Tools for Evaluating the Quality of Medical Education Research

In this final section, we identify instruments that can be used to evaluate the quality of quantitative medical education research articles. To this point, we have focused on framing the study, research methodologies, and potential pitfalls to consider when appraising a specific article. This is important because how a study is framed and the choice of methodology require some subjective interpretation. Fortunately, there are several instruments available for evaluating medical education research methods and providing a structured approach to the evaluation process.

The Medical Education Research Study Quality Instrument (MERSQI) 21 and the Newcastle-Ottawa Scale-Education (NOS-E) 38 are two commonly used instruments, both of which have an extensive body of validity evidence to support the interpretation of their scores. Table 5 21, 39 provides more detail regarding the MERSQI, which includes evaluation of study design, sampling, data type, validity, data analysis, and outcomes. We have found that applying the MERSQI to manuscripts, articles, and protocols has intrinsic educational value, because this practice familiarizes MERSQI users with fundamental principles of medical education research. One aspect of the MERSQI that deserves special mention is the section on evaluating outcomes based on Kirkpatrick’s widely recognized hierarchy of reaction, learning, behavior, and results (table 5; fig.). 40 Validity evidence for the scores of the MERSQI includes its operational definitions to improve response process, excellent reliability and internal consistency, high correlation with other measures of study quality, likelihood of publication, and citation rate, as well as an association between MERSQI score and the likelihood of study funding. 21, 41 Additionally, consequence validity for the MERSQI scores has been demonstrated by its utility for identifying and disseminating high-quality research in medical education. 42

Fig. Kirkpatrick’s hierarchy of outcomes as applied to education research. Reaction = Level 1, Learning = Level 2, Behavior = Level 3, Results = Level 4. Outcomes become more meaningful, yet more difficult to achieve, when progressing from Level 1 through Level 4. Adapted with permission from Beckman and Cook, 2007. 2

Table 5. The Medical Education Research Study Quality Instrument for Evaluating the Quality of Medical Education Research

The NOS-E is a newer tool for evaluating the quality of medical education research. It was developed as a modification of the Newcastle-Ottawa Scale 43 for appraising the quality of nonrandomized studies. The NOS-E includes items focusing on the representativeness of the experimental group, selection and compatibility of the control group, missing data/study retention, and blinding of outcome assessors. 38, 39 Additional validity evidence for NOS-E scores includes operational definitions to improve response process, excellent reliability and internal consistency, and its correlation with other measures of study quality. 39 Notably, the complete NOS-E, along with its scoring rubric, can be found in the article by Cook and Reed. 39

A recent comparison of the MERSQI and NOS-E found acceptable interrater reliability and good correlation between the two instruments. 39 However, notable differences exist between the MERSQI and NOS-E. Specifically, the MERSQI may be applied to a broad range of study designs, including experimental and cross-sectional research. Additionally, the MERSQI addresses issues related to measurement validity and data analysis, and places emphasis on educational outcomes. On the other hand, the NOS-E focuses specifically on experimental study designs, and on issues related to sampling techniques and outcome assessment. 39 Ultimately, the MERSQI and NOS-E are complementary tools that may be used together when evaluating the quality of medical education research.

Conclusions

This article provides an overview of quantitative research in medical education, underscores the main components of education research, and provides a general framework for evaluating research quality. We highlighted the importance of framing a study with respect to purpose, conceptual framework, and statement of study intent. We reviewed the most common research methodologies, along with threats to the validity of a study and its measurement instruments. Finally, we identified two complementary instruments, the MERSQI and NOS-E, for evaluating the quality of a medical education research study.

References

Bordage G: Conceptual frameworks to illuminate and magnify. Medical Education 2009; 43(4):312–9.

Cook DA, Beckman TJ: Current concepts in validity and reliability for psychometric instruments: Theory and application. American Journal of Medicine 2006; 119(2):166.e7–166.e16.

Fraenkel JR, Wallen NE, Hyun HH: How to Design and Evaluate Research in Education, 9th edition. New York, McGraw-Hill Education, 2015.

Hulley SB, Cummings SR, Browner WS, Grady DG, Newman TB: Designing Clinical Research, 4th edition. Philadelphia, Lippincott Williams & Wilkins, 2011.

Irby BJ, Brown G, Lara-Alecio R, Jackson S: The Handbook of Educational Theories. Charlotte, NC, Information Age Publishing, 2015.

American Educational Research Association, American Psychological Association: Standards for Educational and Psychological Testing. 2014.

Swanwick T: Understanding Medical Education: Evidence, Theory and Practice, 2nd edition. Wiley-Blackwell, 2013.

Sullivan GM, Artino AR Jr: Analyzing and interpreting data from Likert-type scales. Journal of Graduate Medical Education 2013; 5(4):541–2.

Sullivan GM, Feinn R: Using effect size—or why the P value is not enough. Journal of Graduate Medical Education 2012; 4(3):279–82.

Tavakol M, Sandars J: Quantitative and qualitative methods in medical education research: AMEE Guide No 90: Part II. Medical Teacher 2014; 36(10):838–48.

Research Support

Support was provided solely from institutional and/or departmental sources.

Competing Interests

The authors declare no competing interests.

Recognizing the Value of Educational Research


  • A recent survey shows that research on teaching and learning is not valued at many AACSB-accredited schools across the U.S. and Canada.
  • One reason that business schools might not recognize research on teaching and learning is that the journal quality lists they commonly use to assess faculty intellectual contributions focus primarily on discipline-based scholarship.
  • STEM fields already place equal value on research on teaching and learning within individual disciplines. By following their lead, two Canadian scholars argue, business schools will enrich their students’ learning experiences.    

If business educators were asked to define the purpose of business schools, they likely would emphasize the need to “prepare the next generation of leaders.” But if this is the case, why do so few business schools prioritize research that advances teaching and curricular design?

Researcher Sanobar Siddiqui first explored this question as the subject of her doctoral dissertation. “One of my thesis findings was that the tenure system’s lack of rewards impedes business academics from pursuing research in teaching and learning,” she explains.

Now an assistant professor of accounting at the University of Regina’s Faculty of Business Administration in Canada, Siddiqui wanted to learn why so many business schools do not value research on teaching and learning (RoTL). This response is puzzling, she says, given that Standard 7 of the  AACSB Business Accreditation Standards  accepts “scholarship of teaching and learning” as documentation to indicate a business school’s teaching effectiveness and impact.

She and Camillo Lento, a professor with the Faculty of Business Administration at Lakehead University in Thunder Bay, Ontario, published a  paper  on the status of RoTL in the April 2022 edition of the International Journal of Educational Management . The paper’s findings are based on a survey in which Siddiqui and Lento asked educators two questions:

  • How do AACSB-accredited business schools in the U.S. and Canada define “teaching effectiveness,” according to AACSB’s Standard 7?
  • Do these schools consider research on teaching and learning in their promotion and tenure decisions?

This topic is particularly important, says Siddiqui, because business schools serve such diverse student audiences. Moreover, learner success is integral to every business school’s mission. Many of the instructional strategies “that we use in class are not research-informed or evidence-based. Hence, we are shortchanging our students,” she says. “Our teaching needs to catch up with the changes we see in our classroom.”

‘A Last Priority’

Siddiqui and Lento received 78 responses to their survey; in the second phase of their study, they conducted semi-structured interviews with 11 educators in the U.S. and Canada.

Among survey respondents, 42 percent noted that they were “unaware of an explicit teaching effectiveness definition” at their schools, but 58 percent said the policies in place at their schools communicated “an implied definition.” Only one respondent could quote a definition of teaching effectiveness from the school’s website.

Respondents noted a lack of “perceived respect and value” for RoTL, describing this line of scholarship as “a last priority” at their schools. As one educator put it, “Our department does not really care about teaching as long as you are cranking out strong scholarship.”


The good news is that 55 percent of respondents noted that their schools did take RoTL into account when making tenure decisions. Siddiqui and Lento found that these schools have two things in common. First, they focus on journal quality alone for the purposes of tenure and faculty qualification, not on whether faculty’s published articles are discipline-based.

Second, these schools are more likely to consider RoTL when faculty include this work “as part of a larger research plan that includes discipline-based research.” Only faculty following teaching tracks are likely to receive tenure based solely on publications in education-focused journals. 

Additionally, teaching-oriented schools are more likely than research-oriented schools to recognize RoTL. While this makes sense on the surface, Siddiqui wonders why prolific faculty who produce innovative scholarship on pedagogical issues that are critical to business education cannot “be hired, promoted, and awarded just like discipline-based researchers” at research-oriented institutions.

What Perpetuates the Stigma?

Siddiqui and Lento point to several factors that could be driving the lack of recognition of RoTL among AACSB-accredited schools:

No consensus about teaching quality.  Although many individual educational institutions have defined teaching effectiveness based on existing research, business schools have not yet established a shared definition of what constitutes effective teaching. However, the co-authors emphasize, more dedicated research could produce findings that inspire a common language around teaching and learning.

The complex nature of determining teaching quality. Schools often evaluate the quality of faculty’s research by whether the work appears in academic journals that are rated highly by certain  journal quality lists . However, they find they cannot use a similar approach to evaluate the quality of faculty’s teaching, says Lento. “The evaluation of teaching effectiveness is much more complex and requires many more sources of information, possibly compiled into a teaching dossier that is unique to an educator.”

A lack of attention in business doctoral programs. Most doctoral programs train young researchers to study topics related to their disciplines of choice. As a result of this early training, RoTL “may come with a stigma as it is outside of traditional discipline-specific research,” Lento says.

Lento admits that the reasons listed above are speculative. He and Siddiqui would like to see other researchers conduct follow-up studies that take deeper dives into the broader stigma surrounding RoTL.

Changing Mindsets, Taking Action

In the meantime, Siddiqui and Lento call on business school administrators and faculty to work together to create a “shared and precise definition of teaching effectiveness.” Educators can start by defining teaching quality within their own institutions.

From there, Siddiqui and Lento say that schools can take any or all of the following actions to change mindsets about RoTL:

  • Set appropriate objectives, incentives, and evaluation mechanisms.
  • Create and nurture communities of practice that help like-minded faculty pursue research focused on solving issues they face in their classrooms.
  • Consider weighing education research in peer-reviewed articles more heavily, particularly for faculty in teaching-focused roles.
  • Recognize RoTL for accreditation and tenure and normalize it as a legitimate form of scholarship.
  • Make seed funds available to faculty who pursue RoTL.
  • Give awards and incentives to faculty who use research-informed teaching in their classrooms.
  • Consider hiring tenure-track academics who also are expert educators with an expressed interest in pursuing RoTL. These scholars can investigate and develop “research-informed teaching tools ready to be put into practice in almost any business classroom,” says Siddiqui. This outcome, she emphasizes, is an indication of how RoTL contributes to the advancement of business disciplines.
  • Encourage and teach RoTL in doctoral programs, with the aim of improving and advancing the quality of teaching at business schools.

Siddiqui points out that the websites of AACSB-accredited schools “are replete with research centers, research chairs and scholars, core research focus areas, research awards, annual research celebration reports, intellectual contributions, and grant-funding awards.”

There is no reason, she says, that schools could not also highlight information about their teaching philosophies, teaching awards, student feedback, educational leadership and professional development, and faculty research on teaching and learning.

Two B-School Perspectives

So far, Siddiqui and Lento’s paper has captured the attention of other like-minded educators in the business school community. This includes Nicola Charwat, associate dean of teaching and learning and senior lecturer of business law and taxation at Monash University’s Monash Business School (MBS) in Caulfield East, Australia.

MBS prioritizes scholarship on teaching and learning (SoTL) where appropriate, she says, through efforts that include identifying quality education-oriented journals and valuing publication in those journals equally to publication in discipline-based journals. The school uses “a consultative process” to identify journals specializing in teaching and learning that are equivalent to discipline-based journals rated as A*, A, B, and C on the quality list compiled by the Australian Business Deans Council.

“We have also instituted a Business Education and Research Group, which has been awarding both practice- and research-output-focused grants to staff for three years,” Charwat says. “Alongside these efforts, of course, there are moves in the university in line with the broader trend of raising the profile of teaching and ensuring its status is on par with other work of the university.”

Despite these changes, Charwat notes that the perception remains that accomplishments related to educational research are “somehow lesser” than those related to discipline-related scholarship. Additionally, many faculty remain uncertain about how to approach educational research. In response, MBS has built communities of practice dedicated to teaching and is now working “to increase awareness of and opportunities to undertake SoTL and education research,” Charwat says.

Charwat says that the questions raised in Siddiqui and Lento’s paper are “essential” to business education, and that their article “has prompted us to start exploring the patterns of our own SoTL and education research.” MBS faculty, she adds, might also pursue a similar study focused on AACSB-accredited schools in Australia. 

Another educator who read the article with interest is Martin Lockett, former dean and professor of strategic management at Nottingham University Business School China (NUBS China) in Zhejiang. Lockett explains that NUBS China uses the Academic Journal Guide, which is produced by the Chartered Association of Business Schools (CABS), to support tenure decisions and to classify faculty under AACSB accreditation standards.

But in the CABS guide, only four journals focused on teaching and learning are rated as 3, 4, or 4*, which are the targets that NUBS China uses to qualify faculty as Scholarly Academics under AACSB accreditation or for internal recognition of quality research, Lockett says.

This has led to worry among the school’s teaching-oriented faculty that if they focus on RoTL, they risk being classified as “additional faculty,” unless they can consistently publish in the few education-focused journals listed by CABS. That concern, Lockett says, deters most faculty from pursuing RoTL in any substantial way.

While this scenario is all too common at institutions with research-focused missions, it is not mandated by AACSB accreditation standards, emphasizes Stephanie Bryant, AACSB’s chief accreditation officer. She clarifies that whether a business school considers educational scholarship for the purpose of accreditation or tenure is its choice, based on the parameters it has set for its individual mission. “The standards do not say anywhere, or imply, that educational research is not valued,” Bryant stresses. The devaluation of RoTL, she adds, “is a school perspective.”

Time to ‘Balance the Scales’

The stigma surrounding RoTL at AACSB-accredited business schools could be lifted, say Siddiqui and Lento, if administrators acknowledge the benefits that fostering cultures of teaching and learning brings to all business school stakeholders. These advantages include a wider scope of scholarship and more evidence-based pedagogical tools for faculty, richer learning experiences and better learning outcomes for students, and more well-rounded job candidates for employers.

Educators in science, technology, engineering, and mathematics (STEM) disciplines already know this, says Siddiqui. STEM departments have long recognized educational research in tenure decisions and regularly reward academics who pursue RoTL in their disciplines.

As one example, Siddiqui points to Carl Edwin Wieman, winner of the 2001 Nobel Prize in Physics. Wieman established the Carl Wieman Science Education Initiative at the University of British Columbia in Canada to encourage evidence-based teaching methods focused on improving undergraduate science education. Since its inception, the initiative has hired fellows who are interested in conducting education research, particularly based in the disciplines in which they have earned their doctorates. It also has inspired the creation of teaching materials in science education, a dedicated website, and a sister initiative at the University of Colorado Boulder in the United States.

Business schools, says Siddiqui, could achieve comparable results by raising awareness of the importance of RoTL, disseminating RoTL findings beyond peer-reviewed journals, and driving research-informed teaching methods that advance business education.

This year, the co-authors published a second paper that finds that scholarly and practice academics who developed rigorous research skills in their doctoral programs and who publish discipline-based research are more likely to pursue RoTL research. Here, Siddiqui and Lento more directly call on business school deans to reward and incentivize this line of research by creating communities of practice and expanding their journal ranking frameworks to include relevant peer-reviewed publications.

It is imperative, Siddiqui and Lento argue, that business schools place studies based on classroom settings on equal footing with studies based on corporate settings. “Research on teaching and learning balances the scales,” Siddiqui says, “by utilizing evidence-based, efficient, and effective teaching to foster deep learning amongst diverse student audiences.”

  • accreditation
  • administration
  • faculty engagement

Breaking bad news: A mix methods study reporting the need for improving communication skills among doctors in Pakistan

  • Muhammad Ahmed Abdullah 1,
  • Babar Tasneem Shaikh 1,
  • Kashif Rehman Khan 2 &
  • Muhammad Asif Yasin 3

BMC Health Services Research, volume 24, Article number: 588 (2024)

Effective skills and training for physicians are essential for communicating difficult or distressing information, also known as breaking bad news (BBN). This study aimed to assess both the capacity and the practices of clinicians in Pakistan regarding BBN.

A cross-sectional study was conducted involving 151 clinicians. The quantitative component used a structured questionnaire, while qualitative data were obtained through in-depth interviews with 13 medical educationists. The responses were analyzed using descriptive statistics and thematic analysis.

While most clinicians acknowledged their responsibility of delivering difficult news, only a small percentage had received formal training in BBN. Areas for improvement include time and interruption management, rapport building, and understanding the patients’ point of view. Prognosis and treatment options were not consistently discussed. Limited importance is given to BBN in medical education.

Training in BBN will lead to improved satisfaction among patients and attendants and to more empathetic support during difficult times.

Introduction

The duties of physicians extend beyond providing effective treatment to patients; they also encompass the development of strong communication skills and the establishment of trust with their patients [ 1 ]. This emphasis on communication is crucial because it enables patients to cope with the seriousness and severity of their illnesses, to make informed decisions regarding treatment options, and to manage potential side effects [ 2 ]. In recent years, there has been a shift in medical practice from a doctor-centered approach to a patient-centered one, in which patients play a significant role in the decision-making process, ultimately leading to increased patient satisfaction [ 3 ]. However, physicians may find themselves burdened when faced with the task of breaking bad news, fearing the potential reactions of their patients [ 4 , 5 ]. Neglecting to address this challenge can have negative consequences for patient-centered healthcare, as physicians’ reluctance to disclose bad news may compromise the mental and physical well-being of patients, and at times of family members too [ 6 ]. In addition, physicians are often uncomfortable with their own emotions and may lack the coping skills to manage them in the moment [ 7 ].

Research studies have documented the lack of training and protocols among doctors for breaking bad news. For instance, a study from Brazil revealed that none of the clinicians at a university hospital were aware of any specific protocol or guidelines for this purpose [ 5 ]. Similarly, in Canada and South Korea, physician training in breaking bad news is reported to be insufficient, and in many underdeveloped countries it is virtually non-existent despite curricular reforms [ 8 ]. In Northern Portugal, a significant number of family physicians expressed apprehension about breaking bad news and deemed training in this area necessary [ 9 ]. In Iran, inadequate training was identified as the main reason behind physicians’ difficulty and fear in delivering bad news to patients, emphasizing the need for formal training in this domain [ 1 ]. In India, one study documented diverse opinions among oncologists regarding breaking bad news and sharing information with patients, underscoring the necessity for physician training in this aspect [ 10 ]. Additionally, a study conducted in Pakistan identified the failure to communicate bad news in a timely and appropriate manner as a common reason for increasing violence against healthcare providers, highlighting the need for better preparation and communication skills during this process [ 4 ]. Several protocols and guidelines have been developed for breaking bad news, with the SPIKES protocol being one of the most widely used due to its comprehensive coverage of essential aspects, particularly the emotional aspect of the process [ 11 ]. SPIKES is a six-step protocol for delivering bad news: S for setting up the meeting, P for assessing the patient’s perception, I for obtaining the patient’s invitation, K for providing knowledge and information to the patient, E for addressing the patient’s emotions with empathic responses, and S for strategy and summary.

Despite the recommendations of the Pakistan Medical and Dental Council to incorporate communication skills into formal medical curricula, and the ongoing discussions regarding medical curricular reforms in Pakistan over the past two decades, little progress has been made in this regard. This lack of action is evident from a recent study conducted in Peshawar, Pakistan [ 12 ]. Thus, the aim of our study was to assess the training as well as the practices of clinicians in Pakistan regarding BBN and provide recommendations for improvement.

Study design

This mixed methods study utilized a cross-sectional design to assess the training and practices of doctors in BBN. The study was conducted at five tertiary care hospitals located in the twin cities of Islamabad and Rawalpindi, namely, Akbar Niazi Teaching Hospital, Benazir Bhutto Hospital, Holy Family Hospital, NESCOM Hospital, and Combined Military Hospital. The data collection period was eight weeks in the first quarter of 2023 to ensure an adequate sample size and data representation. The study participants, selected through simple random sampling, included medical personnel directly involved in healthcare delivery within the selected hospitals who had a minimum of six months of clinical experience. Medical students and Basic Health Sciences faculty were excluded from the study sample.

Data collection

To collect the necessary data, a 25-item self-administered questionnaire was developed. The questionnaire encompassed two main sections. The first section focused on recording participants’ demographic information, including age, gender, designation, specialty, and years of experience. This section aimed to establish a comprehensive profile of the participating doctors, providing a contextual background for the subsequent analysis of their responses. The second section of the questionnaire delved into the participants’ knowledge and practices related to breaking bad news, drawing from the established SPIKES protocol [ 11 ]. This section comprised a series of questions designed to assess the doctors’ familiarity with the protocol, their adherence to its guidelines, and their overall comfort level in delivering challenging news to patients and their families. The SPIKES protocol, which stands for Setting, Perception, Invitation, Knowledge, Emotions, and Strategy, is a widely recognized framework for effective communication during difficult conversations. Before administering the questionnaire, a pilot study was conducted with ten doctors working in general practice clinics, in Rawalpindi/Islamabad, to ensure its clarity, comprehensibility, and relevance to the research objectives. Feedback from the pilot study participants was incorporated into the final version of the questionnaire to enhance its validity and reliability.

Sample size calculation

The sample size for this study was determined based on a 95% confidence level, considering a hypothesized population proportion of 11% with a 5% margin of error. The anticipated frequency of this outcome factor was derived from a previous study [ 13 ]. The population size was estimated to be 200,000. Using the formula for sample size calculation for frequency in a population (n = [DEFF * N * p * (1-p)] / [(d^2 / Z^2) * (N-1) + p * (1-p)]), where DEFF represents the design effect, N is the population size, p is the hypothesized proportion, d is the margin of error, and Z is the critical value corresponding to the desired confidence level, the required sample size was determined to be approximately 151 participants.
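
As a quick arithmetic check, the sketch below reproduces this calculation in Python. It is only an illustration of the formula quoted above; the design effect is assumed to be 1.0, since the paper does not report the DEFF value it used, and Z = 1.96 corresponds to the stated 95% confidence level.

```python
from math import ceil

# Sample size for estimating a proportion in a finite population:
# n = [DEFF * N * p * (1 - p)] / [(d^2 / Z^2) * (N - 1) + p * (1 - p)]
def sample_size(p, d, z, n_pop, deff=1.0):
    numerator = deff * n_pop * p * (1 - p)
    denominator = (d ** 2 / z ** 2) * (n_pop - 1) + p * (1 - p)
    return ceil(numerator / denominator)

# Values reported in the study; DEFF = 1.0 is an assumption.
print(sample_size(p=0.11, d=0.05, z=1.96, n_pop=200_000))  # -> 151
```

With these inputs the formula yields approximately 150.3, which rounds up to the 151 participants reported.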

Data analysis and synthesis

After data collection, the data were subjected to comprehensive analysis using SPSS version 22.0. Descriptive statistics, such as frequencies and percentages, were computed to summarize the data and gain insights into the training and practices of doctors in breaking bad news.
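
For readers who work outside SPSS, the following minimal sketch shows the same kind of frequency-and-percentage summary in Python with pandas. The column names and records are hypothetical placeholders, not values from the study's dataset.

```python
import pandas as pd

# Hypothetical survey responses; column names are illustrative only.
responses = pd.DataFrame({
    "bbn_in_daily_duties": ["yes", "yes", "no", "yes"],
    "formal_bbn_training": ["no", "no", "no", "yes"],
})

# Frequencies and percentages per item, mirroring the descriptive
# summary produced in SPSS.
for column in responses.columns:
    counts = responses[column].value_counts()
    percentages = (counts / len(responses) * 100).round(1)
    summary = pd.DataFrame({"n": counts, "percent": percentages})
    print(f"\n{column}\n{summary}")
```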

The qualitative part of the study aimed to gain insights into the practices and challenges associated with breaking bad news in a healthcare setting. The qualitative data were gathered through in-depth interviews with 13 medical educationists from Pakistan. Each interview lasted between 30 and 45 min and took place in the office spaces of the participants to ensure privacy and confidentiality. The participants were individuals who had been involved in teaching medicine for at least 5 years, including 6 clinicians, 4 individuals from medical education, and 3 from basic sciences departments. The interviews were conducted by the principal investigator, accompanied by a medical student who served as a note-taker. Rigorous note-taking was done during the interviews to capture detailed information, and where possible, the interviews were audio recorded and later transcribed for analysis. Braun and Clarke’s thematic analysis method was used as an iterative process consisting of six steps: (1) becoming familiar with the data, (2) generating codes, (3) generating themes, (4) reviewing themes, (5) defining and naming themes, and (6) locating exemplars [ 14 ]. The analysis began with careful reading of and familiarization with the interview transcripts. Codes were generated to label and categorize meaningful segments of data, which were then refined and grouped into broader themes. The research team engaged in discussions to validate the emerging themes and ensure the reliability of the analysis.

Demographic data showed that, of the 151 respondents, males outnumbered females (62.3%), the mean age was 30.7 (± 8.6 SD) years, and house officers made up the largest proportion, as shown in Table 1. The response rate of employees of private hospitals was higher than that of the public sector, and the respondents were graduates of several medical institutions from all over Pakistan.

Table 2 illustrates the responses to various questions related to BBN. Out of the total respondents, 74% reported that BBN was included in their daily duties, indicating that a significant majority of doctors in Pakistan are involved in delivering difficult news to their patients. However, only 9% of the participants reported receiving training specifically focused on BBN, while the remaining 91% had not received such training.

When considering the timing of BBN training, a small percentage of doctors (2%) reported receiving training during their MBBS education, followed by 3% during their house job, and 3% during postgraduate training. Surprisingly, the majority of respondents (92%) relied on personal experience rather than formal training to navigate the challenges of BBN. Regarding the availability of formal guidelines for BBN, only 10% of the participants reported having access to such guidelines, while the majority (90%) did not have formal guidelines to follow.

Maintaining privacy during the process of BBN was reported by only 14% of the participants, indicating that privacy considerations may not be adequately addressed in some healthcare settings. Meanwhile, patient attendants’ involvement during BBN was reported by 78% of the respondents, suggesting that involving family members or caregivers in the process is common.

When it comes to communication techniques during BBN, 64% of doctors reported sitting while delivering the news, while 36% did not. Time and interruption management, rapport building, patient perception exploration, and adequate patient speaking time were areas where improvements were needed, as reported by the participants.

Furthermore, while 52% of the respondents reported avoiding excessive bluntness and handling emotions appropriately, a considerable portion (48%) did not prioritize these aspects. Identification of emotional state, empathic response, and providing time for personal expression were areas where improvements were necessary, as reported by the participants. Moreover, the participants acknowledged the importance of avoiding jargon and technical terms (44%) and breaking the information into small chunks (45%) to enhance patient understanding. However, further efforts were needed to ensure that hopelessness was avoided during the conversation (50%).

Regarding prognosis and treatment options, 20% of the doctors reported discussing these aspects during BBN conversations, indicating that there is room for improvement in ensuring comprehensive information delivery and empathetic counseling.

In summary, the results highlight several areas where training and guidelines for BBN in Pakistan can be improved. The majority of doctors rely on personal experience rather than formal training, indicating a need for structured educational programs and guidelines in this critical area of healthcare communication. Privacy considerations, effective communication techniques, and emotional support for patients were identified as areas that require further attention and development. The findings emphasize the importance of enhancing training and providing formal guidelines to equip doctors with the necessary skills and strategies for delivering difficult news effectively and compassionately.

The qualitative component of the study involved in-depth interviews with 13 medical educationists from Pakistan. These interviews aimed to explore the level and standard of training on BBN in the curriculum and training of doctors in Pakistan. The interviews revealed several key themes that shed light on the current state of training and education in this area.

Theme 1: ambiguity in subject domains and integration of communication skills

The medical educationists expressed concerns regarding the lack of clarity in subject domains and the integration of communication skills into the medical curriculum. They suggested that communication skills, including BBN, should be incorporated into the community medicine curriculum. Furthermore, they proposed the introduction of family medicine as a dedicated subject at the undergraduate level, which would provide comprehensive training in communication skills and prepare doctors to handle sensitive conversations effectively.

One interviewee highlighted, “There is a lack of clarity when it comes to subject domains and the inclusion of communication skills in our medical curriculum. We believe that communication skills, including breaking bad news, should be integrated into the community medicine curriculum. Additionally, introducing family medicine as a dedicated subject at the undergraduate level would ensure that doctors receive extensive training in effective communication, addressing the emotional needs of patients and their families.” [P6].

This theme emphasizes the need for clear subject domains and the integration of communication skills including BBN within medical education. The proposal to introduce family medicine as an undergraduate subject reflects a holistic approach to training future doctors in effectively delivering difficult news and addressing the diverse needs of patients and their families.

Theme 2: limited importance of breaking bad news in medical education

The medical educationists expressed that at present BBN does not hold a significant place in the teaching and training of doctors in Pakistan. The focus is primarily on technical clinical knowledge and skill development, often neglecting important soft skills such as communication skills, research skills, and logistics. This lack of emphasis on communication training implies that doctors may not be adequately prepared to handle the complexities of BBN and managing the subsequent situations effectively.

During the interviews, one medical educationist highlighted, “In our curriculum, there is a major gap when it comes to training doctors in breaking bad news. The focus is more on technical aspects, and soft skills like communication are often overlooked. This can lead to doctors struggling in delivering difficult news and navigating the emotional complexities that follow.” [P1].

The participants also expressed concerns about the limited exposure and opportunities for doctors to stay up to date with constantly evolving medical knowledge. They emphasized the importance of continuous professional development to ensure doctors are equipped with the latest information and best practices for delivering bad news effectively.

One interviewee shared, “It is crucial for doctors to have appropriate exposure to stay updated with the latest medical knowledge. Breaking bad news requires not only clinical expertise but also an understanding of the emotional and psychological aspects. Continuous professional development programs can help doctors refine their skills and keep abreast of the advancements in this field.” [P3].

Theme 3: learning by example and long-term impact of communication

The interviewees emphasized that BBN cannot be solely taught through theoretical instruction but should be demonstrated through practical examples and role modeling. They highlighted the significance of the communication process itself, as it can have long-term effects on the lives of patients and their families.

An interviewee emphasized, “It’s not just about teaching the process of breaking bad news; it’s about demonstrating empathy, active listening, and providing support throughout the entire journey. Learning by example and observing experienced doctors can be invaluable in developing the necessary communication skills. We must realize that the way we communicate with people during difficult times can have a profound impact on their well-being.” [P2].

Theme 4: lack of standardized training and guidelines

The medical educationists highlighted the absence of standardized training programs and guidelines specifically tailored to breaking bad news in Pakistan. They emphasized the need for a structured curriculum that includes comprehensive training modules and clear guidelines to ensure consistent and effective communication when delivering difficult news.

One interviewee stated, “There is a lack of standardized training and guidelines for breaking bad news in our medical education system. Without a structured curriculum and clear guidelines, doctors may face challenges in approaching these sensitive conversations. Establishing standardized training programs would provide doctors with the necessary tools and frameworks to navigate such situations effectively.” [P4].

Theme 5: inter-professional collaboration and team-based approach

The interviewees emphasized the importance of inter-professional collaboration and a team-based approach in BBN. They highlighted the need for effective communication and coordination among healthcare professionals, including doctors, nurses, psychologists, and social workers, to provide comprehensive support to patients and their families.

One medical educationist shared, “Breaking bad news is a complex process that requires a team-based approach. It is crucial for doctors to collaborate with other healthcare professionals, such as nurses, psychologists, and social workers, to ensure holistic care and support for patients and their families. Promoting effective inter-professional communication is essential in delivering sensitive news with empathy and addressing the diverse needs of patients.” [P7].

The present study aimed to explore the practices and training of clinicians in Pakistan in breaking bad news to patients and their caregivers. The combination of quantitative and qualitative findings, along with comparisons drawn from other studies conducted in developing countries, provides a comprehensive understanding of the current state of BBN practices and training in Pakistan and its relation to similar contexts.

Breaking bad news is part of the daily duties of almost all clinicians. A study conducted in Sudan found that 56% of physicians had received training in BBN, a considerably higher percentage than in our study [ 15 ]. Similarly, a study from Ethiopia reported that 82% of participant physicians were not even aware of the SPIKES protocol, and 84% had no formal or informal training in BBN [ 8 ]. These findings suggest that the level of training and awareness regarding BBN varies across different developing countries. Our study revealed that only 9% of the participants reported receiving formal training specifically focused on BBN. This finding is consistent with studies conducted in other developing countries. For instance, a study from Lahore, Pakistan, involving postgraduate trainees, found a lack of knowledge and low satisfaction regarding BBN skills [ 16 ]. Similarly, a study in Peshawar, Pakistan, reported that 95% of participants had no training in BBN, highlighting a common gap in training among healthcare professionals [ 12 ]. Despite the lack of formal training in BBN, the self-reported practice data in our study are quite positive.

The qualitative component of the study added valuable insights to complement the quantitative findings. Through in-depth interviews, participants’ experiences, perspectives, and challenges regarding BBN were explored. This approach provided a deeper understanding of the participants’ thoughts, emotions, and contextual factors influencing their communication practices. Themes and patterns emerged, offering a nuanced understanding of the quantitative results. The qualitative component also captured participants’ perceptions of training effectiveness, suggestions for improvement, and barriers to implementing optimal communication practices. Nonetheless, respondents were of the view that BBN training must be included either at the undergraduate level or as part of continuing education, and that there should be a structured curriculum. However, there was an opposing viewpoint as well: some respondents said that BBN skills come with experiential learning and maturity, and that what matters is exhibiting an empathetic attitude and care during difficult times. This mixed methods approach allowed for a comprehensive examination of the research questions, generating practical implications for improving physician practices in breaking bad news [ 16 , 17 ].

Comparisons drawn from other developing countries also highlight the need for standardized training programs and guidelines for BBN. For instance, according to one study, adherence to the SPIKES protocol varied among participants, with 35–79% claiming to follow the protocol in routine practice [ 15 ]. Similarly, a study in Ethiopia found that a significant percentage of physicians were not complying with the guidelines of BBN [ 17 ]. These findings indicate the need for structured curricula and clear guidelines to ensure consistent and effective communication skills amongst doctors [ 18 ]. The importance of paying enough attention to the emotions of the recipient and of providing support after breaking bad news cannot be overstated [ 19 ]. A cultural shift is required within the medical profession and healthcare more generally so that BBN is viewed not merely as a soft skill but as a professional responsibility for the doctor and a right for the patients and families who wish to have it [ 20 ].

Limitations

Our study has a few limitations. Very few participants were of the consultant cadre; most of the respondents were junior doctors. Patients as well as caregivers are important stakeholders in this issue, yet their views and perceptions were not explored in the qualitative component of the study.

This study offers valuable insights into the practices and training of clinicians involved in BBN in Pakistan. Comparisons with other studies conducted in developing countries reveal both similarities and differences in BBN practices and training. The findings underscore the necessity of standardized training programs, formal guidelines, and improved communication skills education within medical curricula across developing nations. Recommendations arising from this study include integrating communication skills into the medical curriculum, developing standardized training programs, promoting continuous professional development, fostering inter-professional collaboration, and recognizing the importance of communication skills. By taking these steps, healthcare professionals will be equipped with the necessary tools to navigate the complexities of breaking bad news effectively and to provide compassionate care. Collaboration among medical institutions, policymakers, and regulatory bodies is essential to prioritize communication skills training, establish clear guidelines, and emphasize the value of empathetic and effective communication. Implementing these recommendations will enhance the delivery of difficult news, increase patient satisfaction, and ensure comprehensive support during challenging times.

Data availability

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Biazar G, Delpasand K, Farzi F, Sedighinejad A, Mirmansouri A, Atrkarroushan Z. Breaking bad news: a valid concern among clinicians. Iran J Psychiatry. 2019;14(3):198–202.

Ishaque S, Saleem T, Khawaja FB, Qidwai W. Breaking bad news: exploring patient’s perspective and expectations. J Pak Med Assoc. 2009;60(5):1–3.

Servotte JC, Bragard I, Szyld D, Van Ngoc P, Scholtes B, Van Cauwenberge I, et al. Efficacy of a short role-play training on breaking bad news in the emergency department. West J Emerg Med. 2019;20(6):893–902.

Baig L, Tanzil S, Ali SK, Shaikh S, Jamali S, Khan M. Breaking bad news: a contextual model for Pakistan. Pakistan J Med Sci. 2018;34(6):1336–40.

Ferreira Da Silveira FJ, Botelho CC, Valadão CC. Dando más notícias: a habilidade dos médicos em se comunicar com os pacientes [Breaking bad news: physicians’ ability to communicate with patients]. Sao Paulo Med J. 2017;135(4):323–31.

Basheikh M. Preferences of the Saudi Population in breaking bad medical news: a regional study. Cureus. 2021;13(11).

Silva JV, Carvalho I. Physicians experiencing intense emotions while seeing their patients: what happens? Perm J. 2016 Summer;20(3):15–229.

Fisseha H, Mulugeta W, Kassu RA, Geleta T, Desalegn H. Perspectives of protocol based breaking bad news among medical patients and physicians in a teaching hospital, Ethiopia. Ethiop J Health Sci. 2020;30(6):1017–26.

Ferraz Gonçalves JA, Almeida C, Amorim J, Baltasar R, Batista J, Borrero Y, et al. Family physicians’ opinions on and difficulties with breaking bad news. Porto Biomed J. 2017;2(6):277–81.

Kumar M, Goyal S, Singh K, Pandit S, Sharma DN, Verma AK, et al. Breaking bad news issues: a survey among radiation oncologists. Indian J Palliat Care. 2009;15(1):61–6.

Setubal MSV, Antonio MÂRGM, Amaral EM, Boulet J. Improving perinatology residents’ skills in breaking bad news: a randomized intervention study. Rev Bras Ginecol E Obstet. 2018;40(3):137–46.

Jameel A, Noor SM, Ayub S. Survey on perceptions and skills amongst postgraduate residents regarding breaking bad news at teaching hospitals in Peshawar, Pakistan. J Pak Med Assoc. 2012;62(8):585–9.

Ali AA. Communication skills training of undergraduates. J Coll Physicians Surg Pak. 2013;23(1):10–5.

Byrne D. A worked example of Braun and Clarke’s approach to reflexive thematic analysis. Qual Quant. 2022;56:1391–412.

Abdalazim Dafallah M, Ahmed Ragab E, Hussien Salih M, Nuri Osman W, Omer Mohammed R, Osman M, et al. Breaking bad news: awareness and practice among Sudanese doctors. AIMS Public Health. 2020;7(4):758–68.

Sarwar MZ, Rehman F, Fatima SM, Suhail M, Naqi SA. Breaking bad news skill of postgraduate residents of tertiary care hospital of Lahore, Pakistan: a cross-sectional survey. J Pak Med Assoc. 2019;69(5):695–9.

Ali Khawaja RD, Akhtar W, Khawaja A, Irfan H, Naeem M, Memon M. Patient communication in radiology: current status of breaking bad news among radiologists and radiology trainees in Pakistan. J Coll Physicians Surg Pakistan. 2013;23(10):761–3.

Tran TQ, Scherpbier AJJA, van Dalen J, Dung DV, Elaine PW. Nationwide survey of patients’ and doctors’ perceptions of what is needed in doctor - patient communication in a southeast Asian context. BMC Health Serv Res. 2020;20:946.

Jalali R, Jalali A, Jalilian M. Breaking bad news in medical services: a comprehensive systematic review. Heliyon. 2023;9(4):e14734.

O’Mahony S. Reframing the ‘difficult conversation’. J Royal Coll Physicians Edin. 2022;52(2):93–4.

Acknowledgements

Authors acknowledge the time given by the participants to answer our study questions and also for validating the transcripts.

No funding was obtained or used in this research.

Author information

Authors and affiliations

Health Services Academy, Park Road, Chak Shahzad, Islamabad, 44000, Pakistan

Muhammad Ahmed Abdullah & Babar Tasneem Shaikh

Federal Polyclinic Hospital, Islamabad, Pakistan

Kashif Rehman Khan

Indus Healthcare Network, Islamabad, Pakistan

Muhammad Asif Yasin

Contributions

MAA & BTS were involved in conception and design of the study; MAA, KRK and MAY did the data collection, analysis and interpretation of the literature; and later developed the first draft of the paper; BTS helped in triangulation and contributed in revising it critically for substantial intellectual content and for adding references. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Babar Tasneem Shaikh .

Ethics declarations

Ethics approval and consent to participate

The researchers obtained ethical approval from the Institutional Review Board-Research Committee (IRB-RC) of Islamabad Medical & Dental College, ensuring compliance with ethical guidelines and safeguarding the rights and well-being of the study participants. Akbar Niazi Teaching Hospital is an affiliated teaching hospital of the Islamabad Medical & Dental College; whereas for the remaining hospitals separate letters were written and permission to conduct the study was sought. All participants were provided with information about the study objectives and procedures, and their informed consent was obtained prior to their inclusion in the research.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Abdullah, M.A., Shaikh, B.T., Khan, K.R. et al. Breaking bad news: A mix methods study reporting the need for improving communication skills among doctors in Pakistan. BMC Health Serv Res 24, 588 (2024). https://doi.org/10.1186/s12913-024-11056-2

Received: 31 January 2024

Accepted: 29 April 2024

Published: 06 May 2024

DOI: https://doi.org/10.1186/s12913-024-11056-2

  • Breaking Bad News
  • Communication skills
  • Counselling
  • Physicians’ training
  • Pakistan

BMC Health Services Research

ISSN: 1472-6963

COMMENTS

  1. (PDF) Quantitative Research in Education

    The quantitative research methods in education emphasise basic group designs for research and evaluation, analytic methods for exploring relationships between categorical and continuous ...

  2. Critical Quantitative Literacy: An Educational Foundation for Critical

    For applied quantitative research in education to become more critical, it is imperative that learners of quantitative methodology be made aware of its historical and modern misuses. This directive calls for an important change in the way quantitative methodology is taught in educational classrooms. ... Most important to Baez was how research ...

  3. PDF The Vital Role of Research in Improving Education

    The Value of Education Research: States and the federal government have a legal and ethical obligation to provide high-quality educational opportunities for their students. Far from being unrelated to states' and districts' core education functions, research plays a unique and integral role in identifying best practices, applying resources

  4. Quantitative Research Designs in Educational Research

    Introduction. The field of education has embraced quantitative research designs since early in the 20th century. The foundation for these designs was based primarily in the psychological literature, and psychology and the social sciences more generally continued to have a strong influence on quantitative designs until the assimilation of qualitative designs in the 1970s and 1980s.

  5. Quantitative research in education : Background information

    Educational research has a strong tradition of employing state-of-the-art statistical and psychometric (psychological measurement) techniques. Commonly referred to as quantitative methods, these techniques cover a range of statistical tests and tools. The Sage encyclopedia of educational research, measurement, and evaluation by Bruce B. Frey (Ed.)

  6. PDF Introduction to quantitative research

    Quantitative research is 'Explaining phenomena by collecting numerical data that are analysed using mathematically based methods (in particular statistics)'. Let's go through this definition step by step. The first element is explaining phenomena. This is a key element of all research, be it quantitative or qualitative.

  7. Chapter 1 Quantitative research in education: Impact on evidence-based

    Quantitative research is based on epistemic beliefs that can be traced back to David Hume. Hume and others who followed in his wake suggested that we can never directly observe cause and effect. ... Brigham, F.J. (2010), "Chapter 1 Quantitative research in education: Impact on evidence-based instruction", Obiakor, F.E., Bakken, J.P. and ...

  8. Conducting Quantitative Research in Education

    This book presents a clear and straightforward guide for all those seeking to conduct quantitative research in the field of education, using primary research data samples. It provides educational researchers with the tools they can work with to achieve results efficiently.

  9. Quantitative Research in Education

    Quantitative Research in Education: A Primer, Second Edition is a brief and practical text designed to allay anxiety about quantitative research. Award-winning authors Wayne K. Hoy and Curt M. Adams first introduce readers to the nature of research and science, and then present the meaning of concepts and research problems as they dispel ...

  10. Quantitative Research in Education : A Primer

    Quantitative Research in Education: A Primer, Second Edition is a brief and practical text designed to allay anxiety about quantitative research. Award-winning authors Wayne K. Hoy and Curt M. Adams first introduce readers to the nature of research and science, and then present the meaning of concepts and research problems as they dispel notions that quantitative research is too difficult, too ...

  11. What Is Quantitative Research?

    Revised on June 22, 2023. Quantitative research is the process of collecting and analyzing numerical data. It can be used to find patterns and averages, make predictions, test causal relationships, and generalize results to wider populations. Quantitative research is the opposite of qualitative research, which involves collecting and analyzing ...

  12. Assessing the Quality of Education Research Through Its Relevance to

    The "what works" movement in education policy propelled by the passage of the NCLB has, "privileged causal research—work that uses social experiments or advanced quantitative methods to carefully identify the causal effects of programs and policies—rather than descriptive, correlational, and qualitative research" (Polikoff & Conaway ...

  13. PDF The Usefulness of Qualitative and Quantitative Approaches and ...

    for the understanding of the nature of qualitative and quantitative research approaches used in educational research today. The paradigms are characterized by the methods of data collection and analysis as well as methodological approaches to research which has been generating much controversy among researchers. Bryman (2008, p22-23) argues ...

  14. IMPORTANCE OF QUANTITATIVE ANALYSIS IN EDUCATION (TEACHING

    Quantitative research is the systematic empirical investigation of observable phenomena via statistical, mathematical or computation techniques. Quantitative research is explaining phenomena by collecting numerical data that are analysed using mathematical based methods. (Aliaga and Gunderson, 2000).

  15. Quantitative Research in Research on the Education and Learning of

    This chapter starts from the observation that there is a limited presence of quantitative research published in leading adult education journals such as Adult Education Quarterly, Studies in Continuing Education and International Journal of Lifelong Learning.This observation was also discussed by Fejes and Nylander (2015, see also Chap. 7).As an adult education scholar mainly working with ...

  16. Quantitative Research in Education

    The quantitative research methods in education emphasise basic group designs for research and evaluation, analytic methods for exploring relationships between categorical and continuous measures, and statistical analysis procedures for group design data. The essential is to evaluate quantitative analysis and provide the research process ...

  17. Data Collection in Educational Research

    Historically, much of the data collection performed in educational research depended on methods developed for studies in the field of psychology, a discipline which took what is termed a "quantitative" approach. This involves using instruments, scales, Tests, and structured observation and interviewing. By the mid- to late twentieth ...

  18. Quantitative research in education : Recent e-books

    David Gibson (Ed.) Publication Date: 2020. The book aims to advance global knowledge and practice in applying data science to transform higher education learning and teaching to improve personalization, access and effectiveness of education for all. Currently, higher education institutions and involved stakeholders can derive multiple benefits ...

  19. Qualitative vs. Quantitative Research: Comparing the Methods and

    No matter the field of study, all research can be divided into two distinct methodologies: qualitative and quantitative research. Both methodologies offer education researchers important insights. Education research assesses problems in policy, practices, and curriculum design, and it helps administrators identify solutions. Researchers can ...

  20. A Practical Guide to Writing Quantitative and Qualitative Research

    In quantitative research, ... To construct effective research questions and hypotheses, it is very important to 1) ... " The study hypothesis was if Tanzanian pregnant women and their families received a family-oriented antenatal group education, they would (1) have a higher level of BPCR, (2) ...

  21. (PDF) Conducting Quantitative Research in Education

    This book provides a clear and straightforward guide for all those seeking to conduct quantitative research in the field of education, using primary research data samples. While positioned as less ...

  22. Why Is Quantitative Research Important?

    Advantages of Quantitative Research. Quantitative researchers aim to create a general understanding of behavior and other phenomena across different settings and populations. Quantitative studies are often fast, focused, scientific and relatable. 4. The speed and efficiency of the quantitative method are attractive to many researchers.

  23. How Do I Critically Consume Quantitative Research?

    The backbone of quantitative research is data. In order to have any data, participants or cases must be found and measured for the phenomena of interest. These participants are all unique, and it is this uniqueness that needs to be disclosed to the reader.

  24. Quantitative Research Methods in Medical Education

    This article provides an overview of quantitative research in medical education, underscores the main components of education research, and provides a general framework for evaluating research quality. We highlighted the importance of framing a study with respect to purpose, conceptual framework, and statement of study intent.

  25. Recognizing the Value of Educational Research

    Two researchers want to persuade more business schools to encourage, support, and reward faculty who conduct rigorous research on teaching and learning. A recent survey shows that research on teaching and learning is not valued at many AACSB-accredited schools across the U.S. and Canada. One reason that business schools might not recognize ...

  26. Breaking bad news: A mix methods study reporting the need for improving

    Background Effective skills and training for physicians are essential for communicating difficult or distressing information, also known as breaking bad news (BBN). This study aimed to assess both the capacity and the practices of clinicians in Pakistan regarding BBN. Methods A cross-sectional study was conducted involving 151 clinicians. Quantitative component used a structured questionnaire ...