Guidelines and Guidance

Using Qualitative Evidence in Decision Making for Health and Social Interventions: An Approach to Assess Confidence in Findings from Qualitative Evidence Syntheses (GRADE-CERQual)

Simon Lewin, Claire Glenton, Heather Munthe-Kaas, Benedicte Carlsen, Christopher J. Colvin, Metin Gülmezoglu, Jane Noyes, Andrew Booth, Ruth Garside, Arash Rashidian

Author affiliations:

  • Global Health Unit, Norwegian Knowledge Centre for the Health Services, Oslo, Norway
  • Health Systems Research Unit, South African Medical Research Council, Cape Town, South Africa
  • Social Welfare Unit, Norwegian Knowledge Centre for the Health Services, Oslo, Norway
  • Uni Research Rokkan Centre, Bergen, Norway
  • Division of Social and Behavioural Sciences, School of Public Health and Family Medicine, University of Cape Town, Cape Town, South Africa
  • UNDP/UNFPA/UNICEF/WHO/World Bank Special Programme of Research, Development and Research Training in Human Reproduction, Department of Reproductive Health and Research, WHO, Geneva, Switzerland
  • School of Social Sciences, Bangor University, Bangor, United Kingdom
  • School of Health & Related Research (ScHARR), University of Sheffield, Sheffield, United Kingdom
  • European Centre for Environment and Human Health, University of Exeter Medical School, Exeter, United Kingdom
  • Department of Health Management and Economics, School of Public Health, Tehran University of Medical Sciences, Tehran, Iran
  • Department of Information, Evidence and Research, Eastern Mediterranean Region, World Health Organization, Cairo, Egypt


Published: October 27, 2015

https://doi.org/10.1371/journal.pmed.1001895

10 Jun 2016: The PLOS Medicine Staff (2016) Correction: Using Qualitative Evidence in Decision Making for Health and Social Interventions: An Approach to Assess Confidence in Findings from Qualitative Evidence Syntheses (GRADE-CERQual). PLOS Medicine 13(6): e1002065. https://doi.org/10.1371/journal.pmed.1002065 View correction


Citation: Lewin S, Glenton C, Munthe-Kaas H, Carlsen B, Colvin CJ, Gülmezoglu M, et al. (2015) Using Qualitative Evidence in Decision Making for Health and Social Interventions: An Approach to Assess Confidence in Findings from Qualitative Evidence Syntheses (GRADE-CERQual). PLoS Med 12(10): e1001895. https://doi.org/10.1371/journal.pmed.1001895

Copyright: © 2015 Lewin et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: This work was supported by funding from the Department of Reproductive Health and Research, WHO ( www.who.int/reproductivehealth/about_us/en/ ) and Norad (Norwegian Agency for Development Cooperation: www.norad.no ) to the Norwegian Knowledge Centre for the Health Services. Additional funding for several of the pilot reviews was provided by the Alliance for Health Policy and Systems Research ( www.who.int/alliance-hpsr/en/ ). We also received funding for elements of this work through the Cochrane supported "Methodological Investigation of Cochrane reviews of Complex Interventions" (MICCI) project ( www.cochrane.org ). SL is supported by funding from the South African Medical Research Council ( www.mrc.ac.za ). The funders had no role in study design, data collection and analysis, preparation of the manuscript or the decision to publish.

Competing interests: JN declared receiving a small grant from the Cochrane Collaboration to undertake a component of this work and received travel expenses to attend meetings from the WHO Alliance for Health Policy and Systems Research and the Norwegian Knowledge Centre for the Health Services. All other authors have declared that no competing interests exist.

Abbreviations: CERQual, Confidence in the Evidence from Reviews of Qualitative research; GRADE, Grading of Recommendations Assessment, Development, and Evaluation; WHO, World Health Organization

Provenance: Not commissioned; externally peer reviewed

Summary Points

  • Qualitative evidence syntheses are increasingly used, but methods to assess how much confidence to place in synthesis findings are poorly developed.
  • The Confidence in the Evidence from Reviews of Qualitative research (CERQual) approach helps assess how much confidence to place in findings from a qualitative evidence synthesis.
  • CERQual’s assessment of confidence for individual review findings from qualitative evidence syntheses is based on four components: the methodological limitations of the qualitative studies contributing to a review finding, the relevance to the review question of the studies contributing to a review finding, the coherence of the review finding, and the adequacy of data supporting a review finding.
  • CERQual provides a transparent method for assessing confidence in findings from qualitative evidence syntheses. Like the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) approach for evidence of effectiveness, CERQual may facilitate the use of qualitative evidence to inform decisions and shape policies.
  • The CERQual approach is being developed by a subgroup of the GRADE Working Group.

Introduction

The systematic use of research evidence to inform health and social policies is becoming more common among governments, international organisations, and other health institutions, and systematic reviews of intervention effectiveness are now used frequently to inform policy decisions. However, evidence of effectiveness is not sufficient to inform decisions on health and social interventions. Decision makers also need information on the feasibility and acceptability of interventions, so as to better understand factors that may influence their implementation [ 1 , 2 ]. Evidence informing the implementation of an intervention within a health or social care system may be obtained from a range of research, including qualitative research. Furthermore, there has been a rapid increase in the number of syntheses of qualitative research being undertaken and in the development of new methods in this area [ 3 – 5 ].

Most systematic reviewers of qualitative research evidence agree that there is a need to distinguish good quality primary studies from those of poor quality and, further, that structured approaches are needed to enhance the consistency and transparency of such appraisal [ 6 ]. While such appraisal may give an indication of the trustworthiness of individual studies, and of the review’s evidence base as a whole, it does not inform the decision maker about individual findings within a review, which are produced through the synthesis of different combinations of findings from studies in the review. Typically, policy makers and other end users use these individual findings ( Box 1 ) to inform decisions about health or social care interventions. We therefore need an approach for assessing how much confidence to place in specific review findings to help users judge how much emphasis they should give to these findings in their decisions.

Box 1. What Is a Review Finding?

The CERQual approach is applied to individual review findings from a qualitative evidence synthesis. Critical to the application and development of CERQual is, therefore, an understanding of what a review finding is. While what constitutes a finding may be obvious in some syntheses, in others it will be unclear to which findings (or at which level of synthesis) the CERQual approach should be applied.

For the purposes of CERQual, we define a review finding as an analytic output from a qualitative evidence synthesis that, based on data from primary studies, describes a phenomenon or an aspect of a phenomenon.

By “phenomenon,” we mean the issue that is the focus of the qualitative inquiry. The phenomenon of interest may be a health or social intervention or issue ( S1 Table ).

How review findings are defined and presented depends on many factors, including the review question, the synthesis methods used, the intended purpose or audience of the synthesis, and the richness of the data available. The many approaches to qualitative synthesis range, in terms of purpose, from those that aim to identify and describe major themes to those that seek more generalizable, interpretive explanations that can be used for theory building [ 7 ]. Furthermore, many syntheses use both of these approaches or include findings that cannot clearly be defined as either descriptive or interpretive.

An example of a qualitative evidence synthesis that presents different levels of findings is that by Thomas and colleagues on barriers to healthy eating among children. At a more descriptive level, the review includes the finding that children’s food choices are constrained by the availability of food for school dinners and by pressures to choose and eat food quickly. At a more interpretive level, the review attempts to build theory around children’s eating habits. The review authors discuss the finding that children did not see it as their role to be interested in health, preferring to prioritize taste, and that buying healthy food was not a legitimate use of their own pocket money [ 8 ]. Similarly, a recent synthesis on factors affecting the implementation of lay health worker programmes presented a range of more descriptive findings tied to programme acceptability among different stakeholders, lay health worker motivation, and health systems constraints. The review authors organised these findings in a logic model in which they proposed different chains of events in which specific lay health worker programme components led to particular intermediate or long-term outcomes, and in which specific moderators positively or negatively affected this process [ 9 ].

To date, CERQual has been applied to more descriptive-level review findings in syntheses that have been commissioned and used for guideline development for health systems. Given the range of synthesis methods available and the many options for presenting review findings, review authors will need to judge on a case-by-case basis when it is appropriate to apply the CERQual approach. As experience in applying the approach is gained, guidance will be developed on the range of review findings to which CERQual can be applied.

For findings from systematic reviews of the effectiveness of interventions, the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) approach is now in common use. GRADE allows a consistent and transparent assessment of confidence in evidence of effectiveness for each outcome considered in a systematic review. Key elements in a GRADE assessment, applied to each review outcome, include the risk of bias in the included studies, the relevance or directness of these studies to the review question, the consistency of results from these studies, the precision of the estimate, and the risk of publication bias in the contributing evidence. Such assessments of findings from reviews of effectiveness are a critical component of developing recommendations on health care interventions [ 10 ].

Guideline development groups, and other users of evidence from systematic reviews, are often familiar with the GRADE approach for assessing how much certainty to place in findings from reviews of the effectiveness of interventions. However, GRADE is not appropriate for qualitative evidence. As the demand for syntheses of qualitative evidence increases, so does the need to be able to assess how much confidence to place in findings from these syntheses [ 1 ]. At present, there is no established approach for indicating how confident we can be in the findings from qualitative evidence syntheses, although one previous study attempted to adapt the GRADE approach to qualitative review findings in a mixed-methods synthesis [ 11 ], while another study described a tool developed specifically to assess confidence in findings for meta-aggregative qualitative evidence syntheses [ 12 ]. The lack of an established approach is an important constraint to incorporating qualitative evidence on the acceptability and feasibility of health interventions into tools to support decision making, including the GRADE Evidence to Decision frameworks [ 13 ]. This paper describes a new approach for assessing how much confidence to place in findings from qualitative evidence syntheses.

Development of the Confidence in the Evidence from Reviews of Qualitative research (CERQual) Approach

The CERQual approach was initially developed in 2010 to support a panel that was using qualitative evidence syntheses to develop a new World Health Organization (WHO) guideline [ 14 ]. The technical team for this guideline needed an approach for consistently and transparently assessing and presenting any concerns about the qualitative evidence synthesis findings being used by the panel to inform the guideline.

To develop CERQual, we established a working group of researchers involved in undertaking evidence syntheses. We needed an approach that could be applied to findings from common types of qualitative study designs (e.g., ethnography, case studies) and data (e.g., from interviews or observations), was easy to use, provided a systematic approach to making judgements, allowed these judgements to be reported transparently, and allowed judgements to be understood easily, including by readers without an in-depth understanding of qualitative methods. This work was informed by both the principles of qualitative research and the principles used to develop the GRADE approach for evidence of effectiveness [ 15 ]. The guidance in this paper has also been developed in collaboration and agreement with the GRADE Working Group ( www.gradeworkinggroup.org ).

CERQual was developed iteratively. Our first version included two components—methodological limitations and coherence—and was piloted on five syntheses [ 9 , 16 – 19 ]. In 2013, we presented the CERQual approach to researchers, methodologists, and decision makers at a number of events, including the Cochrane Colloquium [ 20 ] and a GRADE Working Group meeting. We then revised the approach, based on feedback from these sessions, to include an additional two components. This gave the approach a total of four components: methodological limitations, relevance, coherence, and adequacy of data. We also identified a further potential component—dissemination bias—as being important but requiring further methodological research before we are able to make a decision on whether this should be included in the CERQual approach ( Box 2 ).

Box 2. Dissemination Bias in Qualitative Research

Dissemination bias (also referred to as publication bias) may be important for qualitative evidence syntheses in situations in which selective dissemination of qualitative studies or the findings of qualitative studies results in systematic distortion of the phenomenon of interest (see S1 Table ). However, empirical evidence on the extent of dissemination bias in qualitative research is very limited—to our knowledge, only one small study on this issue has been conducted [ 21 ]. Further, empirical evidence of the impacts of dissemination bias on qualitative evidence syntheses does not, to our knowledge, exist at present. We also do not have methods available for exploring whether the findings of a synthesis have been distorted systematically by dissemination bias.

A programme of methodological work is currently underway to explore both the extent and nature of dissemination bias in qualitative research and how such bias impacts on qualitative evidence synthesis findings.

To obtain further feedback, we presented the current, four-component version of the approach in 2014 to a group of 25 invited methodologists, researchers, and end users from more than twelve international organizations, with a broad range of experience in qualitative research, the development of GRADE, or guideline development.

We are not attempting to produce a rigid checklist to appraise review findings; the risks of applying such critical appraisal checklists unreflectively to qualitative primary studies have been discussed widely in the literature [ 6 , 22 – 24 ]. Rather, CERQual is conceived of as a structured approach to appraisal that requires reviewer judgement and interpretation throughout the process. Our reasons for developing it are both epistemological and pragmatic. We believe that we should have different degrees of confidence in different findings from a qualitative evidence synthesis because of differences in the evidence that informs each finding. In developing the CERQual components, we have strived to capture core concerns of qualitative researchers, such as the richness of findings and the explanatory power of any interpretive concepts. We have also tried to respond to the needs of decision makers and other users for research that can usefully inform their policy and practice questions. Without a structured approach, judgements about confidence in a finding are likely to be made anyway by users, but in an ad hoc fashion. Indeed, without a structure for thinking about confidence in the findings of qualitative evidence syntheses, such syntheses risk being further marginalised and underused in informing policy and practice. We anticipate that the approach may be refined in the future through further development by the CERQual team and through experience in using the approach. The four CERQual components are described in detail below.

Purpose of CERQual

CERQual aims to transparently assess and describe how much confidence decision makers and other users can place in individual review findings from syntheses of qualitative evidence. We have defined confidence in the evidence as an assessment of the extent to which the review finding is a reasonable representation of the phenomenon of interest. Put another way, it communicates the extent to which the research finding is likely to be substantially different from the phenomenon of interest. By substantially different, we mean different enough that it might change how the finding influences a practical or policy decision about health, social care, or other interventions.

A CERQual assessment provides decision makers with the information they need to decide how much emphasis to give to a particular review finding. Box 1 outlines how a review finding is defined for the purpose of CERQual assessments, Box 3 summarises the purpose of CERQual as well as specifying the issues that CERQual is not intended to address, and S1 Table describes other definitions relevant to CERQual.

Box 3. The Purpose of CERQual and What CERQual Is Not Intended to Address

The CERQual approach transparently assesses and describes how much confidence to place in individual review findings from syntheses of qualitative evidence.

CERQual is not intended for the following:

  • Critical appraisal of the methodological limitations of an individual qualitative study
  • Critical appraisal of the methodological limitations of a qualitative evidence synthesis
  • Appraisal of quantitative or mixed methods data
  • Assessing how much confidence to place in the findings from what are sometimes described as “narrative” or “qualitative” summaries of the effectiveness of an intervention, in systematic reviews of effectiveness in which meta-analysis is not possible
  • Assessing how much confidence to place in the overall findings of a qualitative evidence synthesis. Rather, it focuses on assessing how much confidence to place in individual review findings from qualitative evidence syntheses

Components of CERQual

Four components contribute to an assessment of confidence in the evidence for an individual review finding: methodological limitations, relevance, coherence, and adequacy of data ( Table 1 ). Concerns about any of these components may lower our confidence in a review finding. Each component is discussed in more detail below. The CERQual components reflect similar concerns to the elements included in the GRADE approach for assessing the certainty of evidence on the effectiveness of interventions ( S2 Table ). However, CERQual considers these issues from a qualitative perspective. This paper focuses on situations in which review authors assess how much confidence to place in findings from a review they have undertaken themselves. It may also be possible to apply CERQual to the findings from qualitative evidence syntheses performed by others, and this is discussed later.

Table 1. Available at: https://doi.org/10.1371/journal.pmed.1001895.t001
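To make the structure of such an assessment concrete, the following short sketch (in Python, with hypothetical names; CERQual itself prescribes no software or data format) records, for a single review finding, the contributing studies, a judgement and explanation for each of the four components, and the overall confidence assessment. It is bookkeeping only: every value in it is a judgement made by the review authors, not something computed.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical bookkeeping structures; CERQual prescribes no software or data format.
COMPONENTS = ("methodological limitations", "relevance", "coherence", "adequacy of data")

@dataclass
class ComponentJudgement:
    component: str      # one of COMPONENTS
    concern: str        # e.g., "no or very minor", "minor", "moderate", or "serious" concerns
    explanation: str    # the review authors' rationale for this judgement

@dataclass
class ReviewFindingAssessment:
    finding: str                                    # the review finding, stated in full
    contributing_studies: List[str] = field(default_factory=list)
    judgements: List[ComponentJudgement] = field(default_factory=list)
    overall_confidence: str = ""                    # "high", "moderate", "low", or "very low"
    overall_explanation: str = ""                   # rationale for the overall assessment

# Example values, invented purely for illustration:
assessment = ReviewFindingAssessment(
    finding="Example review finding ...",
    contributing_studies=["Study A", "Study B", "Study C"],
    judgements=[ComponentJudgement(c, "minor concerns", "Explanation ...") for c in COMPONENTS],
    overall_confidence="moderate",
    overall_explanation="Rated down one level owing to minor concerns across several components.",
)
print(assessment.overall_confidence)
```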

Methodological Limitations

Definition and explanation.

Methodological limitations are the extent to which there are problems in the design or conduct of the primary studies that contributed evidence to a review finding. When the primary studies underlying a review finding are shown to have important methodological limitations, we are less confident that the review finding reflects the phenomenon of interest.

Operationalizing “methodological limitations”.

When undertaking a qualitative evidence synthesis, review authors should assess the methodological limitations of each primary study included in the synthesis. This should be done using a relevant checklist or tool (for instance, [ 25 – 27 ]). An assessment of methodological limitations should be based on the methodological strengths and weaknesses of each study as there is no hierarchy of study design within qualitative research. Review authors should present and explain these assessments in the review appendices.

When assessing the methodological limitations of the evidence underlying a review finding, review authors must make an overall judgement based on all of the primary studies contributing to the finding. This judgement needs to take into account each study’s relative contribution to the evidence, the types of methodological limitations identified, and how those methodological limitations may impact on the specific finding.

How a primary study was conducted may constitute a methodological limitation for one review finding but not for another finding. For instance, in a study on sexual behaviour among teenagers, a decision to use focus group discussions to collect data may be regarded as a limitation for findings about teenagers’ perceptions of risky or illegal behaviour but may not be regarded as a problem for findings about teenagers’ perceptions of sex education. This is because teenagers may be less willing to talk frankly about the former within a group setting.
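As a concrete illustration of this two-step process, the sketch below (Python, with hypothetical study names and checklist domains; the paper does not prescribe a particular checklist or any software) first records a per-study appraisal and then a per-finding judgement across the contributing studies. The overall judgement is recorded, not computed, since it depends on each study's relative contribution and on how its limitations bear on the specific finding.

```python
# Hypothetical sketch: per-study appraisal first, then a per-finding overall judgement.
# The checklist domains and study names below are illustrative placeholders only.
study_appraisals = {
    "Study A (interviews)": {
        "clear statement of aims": "yes",
        "appropriate design": "yes",
        "data collection described": "unclear",
        "reflexivity reported": "no",
    },
    "Study B (focus groups)": {
        "clear statement of aims": "yes",
        "appropriate design": "yes",
        "data collection described": "yes",
        "reflexivity reported": "yes",
    },
}

# For each review finding, record which studies contribute and the review authors'
# overall judgement of methodological limitations for that finding, with its rationale.
finding_judgement = {
    "finding": "Teenagers' views of sex education ... (illustrative finding)",
    "contributing_studies": list(study_appraisals),
    "methodological_limitations": "minor concerns",
    "explanation": ("Focus group data collection is not judged to be a limitation for this "
                    "finding, as sex education was discussed openly in groups; limited "
                    "reporting of data collection in Study A slightly lowers confidence."),
}
print(finding_judgement["methodological_limitations"])
```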

Implications when methodological limitations are identified.

When we identify methodological limitations for a particular review finding, this may indicate that primary researchers in this area need to use more appropriate methods or to report the methods used more clearly in future studies.

Relevance

Definition and explanation.

Relevance is the extent to which the body of evidence from the primary studies supporting a review finding is applicable to the context specified in the review question. This may relate to, for example, the perspective or population researched, the phenomenon of interest, or the setting.

Relevance is important in assessing confidence as it indicates to the end user the extent to which the contexts of the primary studies contributing evidence to a finding are aligned with the context specified in the review question. When the contexts of the primary studies underlying a review finding are substantively different from the context of the review question, we are less confident that the review finding reflects the phenomenon of interest.

Operationalizing “relevance”.

For the most part, a review’s inclusion criteria for studies are aligned with the review question, and the included studies are therefore relevant to the review question. However, there are situations in which studies are of reduced relevance. This can be due to differences relating to any of the main domains in a typical review question. This may include differences in the perspective or population, the phenomenon of interest or intervention, the setting, or the time frame. We propose three ways in which relevance could be categorized: indirect relevance, partial relevance, and uncertain relevance.

The evidence supporting a review finding may be indirectly relevant if one of the review domains above, such as perspective or setting, has been substituted with another. For example, the authors of a qualitative evidence synthesis plan to address the question of people’s responses to the swine flu pandemic, but they find no studies exploring this question. However, the review authors identify studies looking at people’s responses to the bird flu pandemic. These studies are included as a likely alternative indicator of people’s responses to the phenomenon of interest (swine flu). Indirect relevance implies that the review authors (or others) have made assumptions about the relevance of the findings to the original review question.

Relevance may be partial when the studies identified for a review address only a subset of the relevant review domains above. For example, in a synthesis exploring how children living in care institutions across Europe experience different models of care, the review authors only identify studies from Norway. Only part of the review question is therefore addressed. Partial relevance implies that the review question is only addressed in a limited way. When this occurs, review authors need to determine which domains in the review question are most important in assessing relevance.

The degree of relevance may be assessed as uncertain when the review authors are unsure about the extent to which the focus of the included studies reflects the phenomenon of interest because of deficiencies in the reported details of the population, intervention, or settings. For example, in a qualitative evidence synthesis exploring cancer patients’ experience with mindfulness-based training, the review authors identify several studies. However, it is unclear whether all of these training programmes include similar approaches to both mindfulness and mindfulness-based training. Uncertain relevance implies that it is difficult to draw conclusions about the relevance of the review finding to the review question.

Our confidence in a review finding may be weakened if the relationship between the contexts of the primary studies and the review question is indirect, partial, or uncertain. Review authors should describe any concerns regarding the extent to which the review finding reflects the context of interest. This will allow end users to better understand the assessment and consider the finding in relation to their own context.

Implications when concerns regarding relevance are identified.

Concerns regarding relevance could indicate a need for more research in different contexts and for better reported primary research. However, they could equally indicate that the phenomenon that is the focus of the review is not prevalent in a given context. For example, a review of parental worries about their children’s health may not uncover European-based studies in which dysentery is mentioned. Rather than indicating gaps in relevant data, this is more likely to be because parents in Europe do not discuss fear of dysentery when asked specifically about their children’s health since it is not a common health problem in most high-income settings.

Coherence

Definition and explanation.

Qualitative review findings are developed by identifying patterns in the data across the primary studies included in an evidence synthesis. The coherence of the review finding addresses the question of whether the finding is well grounded in data from these primary studies and provides a convincing explanation for the patterns found in these data.

Coherence in the data contributing to a review finding may be contextual, where patterns are found across studies that are similar to each other with respect to population, interventions, or settings; or conceptual, where patterns in the data from the underlying studies can be explained in relation to new or existing theory. Patterns need to be explained and supported through data presented in the primary studies or through hypotheses developed by the primary study authors or the review authors.

Review findings are sometimes challenged by outlying, contrasting, or even disconfirming data from the primary studies that do not support or that directly challenge the main finding. Review authors should look actively for such data that complicate or challenge their main findings [ 28 ] and attempt to explain these variations or exceptions. When there is no convincing explanation for these variations or exceptions, we are less confident that the review finding reflects the phenomenon of interest. Guidance on what constitutes a convincing explanation needs further development.

Operationalizing “coherence”.

Confidence in a review finding may be lower when there is an unexplained lack of coherence. When theories or explanations are used to explain similarities or variations, review authors should specify whether the theory or explanation is internally generated (i.e., the theory or explanation comes from one or several of the studies underlying the review finding), externally sourced (i.e., the theory or explanation is imported from an external source, such as an established concept or theory), or original (i.e., the theory or explanation has been developed by the review authors as part of the synthesis process).

Reasons why it may be difficult to explain the variation in data across individual studies contributing to a finding include that the available data are too thin [ 29 ], outlying or disconfirming cases are not well explored, the review authors do not know the field sufficiently well to generate an explanation, the theory used to inform the review is incomplete or flawed, or the study sampling for the review was limited. Study sampling and the extent to which outlying cases were explored may also be assessed as part of the “methodological limitations” component of CERQual.

Since the patterns that constitute a review finding are created by the review authors, assessing coherence during the synthesis offers an opportunity for “self-check” or reflection. Examining the coherence of the review findings gives review authors an opportunity to reflect on the extent to which the pattern captured in the review finding really is contextually or conceptually coherent. It also gives review authors an opportunity to offer a convincing explanation for the patterns they have found and to note the presence of disconfirming cases.

Implications when concerns about coherence are identified.

Concerns regarding the coherence of a review finding can have several implications. Firstly, review authors should consider using the patterns found across primary studies to generate new hypotheses or theory regarding the issue addressed by the finding. Secondly, a lack of coherence in relation to a particular review finding may suggest that more primary research needs to be done in that area and that the review should perhaps be updated once those data are available. Finally, when a review has used a sampling procedure to select studies for inclusion in the review [ 30 ], future updates of the review could reconfigure the sampling to explore the variation found.

Adequacy of Data

Definition and explanation.

Adequacy of data is an overall determination of the degree of richness and quantity of data supporting a review finding.

In assessing adequacy of data, we define “rich data” as data that provide us with sufficient detail to gain an understanding of the phenomenon we are studying—for instance, an understanding of participants’ perceptions and experiences of a given topic. In contrast, thin data do not provide enough detail to develop an understanding of the phenomenon of interest.

In addition to data richness, quantity of data is also important. When a review finding is supported by data from only one or few primary studies, participants, or observations, we are less confident that the review finding reflects the phenomenon of interest. This is because when only a few studies or only small studies exist or when few are sampled, we do not know whether studies undertaken in other settings or groups would have reported similar findings.

Operationalizing “adequacy of data”.

Confidence in a review finding may be lower when there are concerns regarding whether there are adequate amounts of data contributing to a review finding. This could include concerns about the richness of the data or the number of studies, participants, or observations from which the data are drawn.

Review authors need to judge adequacy in relation to the claims made in a specific review finding. There are therefore no fixed rules on what constitutes sufficiently rich data or an adequate quantity of data. When considering whether there are adequate data, review authors may find the principle of saturation of data useful or could consider the extent to which additional data are likely to change the finding [ 31 – 34 ]. Review authors should also look for disconfirming cases. More work is needed on how to apply these strategies in the context of a qualitative evidence synthesis.

Implications when concerns regarding the adequacy of data are identified.

When adequacy of data is not achieved, this may suggest that more primary research needs to be done in relation to the issue discussed in the review finding and that the review should be updated once that research is available. Inadequate data may indicate that the review question was too narrow and that future syntheses should consider a broader scope or include primary studies that examine phenomena that are similar, but not identical, to that under consideration in the synthesis. This, in turn, might have implications for assessment of relevance.

Making an Assessment of Level of Confidence for a Finding

As noted earlier, our confidence in the evidence is an assessment of the extent to which the review finding is a reasonable representation of the phenomenon of interest ( S1 Table ). This assessment is based on the judgements made for each of the four CERQual components. These judgements can be summarised in a CERQual Qualitative Evidence Profile ( Table 2 ). While each CERQual component should initially be assessed individually, review authors also need to look iteratively across the components in order to make a final assessment as components may interact, as noted above, and also to avoid “double downgrading” for the same issue.

Table 2. Available at: https://doi.org/10.1371/journal.pmed.1001895.t002

To indicate our assessment of confidence, we propose four levels: high, moderate, low, or very low. This is a similar approach to that used in the GRADE tool for assessing confidence in the evidence on the effectiveness of interventions [ 35 ]. The levels of confidence for CERQual are defined in Table 3 . We propose that all review findings start off as “high confidence” and are then “rated down” by one or more levels if there are concerns regarding any of the CERQual components. This starting point of “high confidence” reflects a view that each review finding should be seen as a reasonable representation of the phenomenon of interest unless there are factors that would weaken this assumption. Confidence should be assessed for each review finding individually and not for the review as a whole. Future papers will describe in more detail for each CERQual component the circumstances under which confidence in a review finding should be rated down.

Table 3. Available at: https://doi.org/10.1371/journal.pmed.1001895.t003
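As an illustration of this "start high and rate down" bookkeeping (the decision of how far to rate down remains a judgement made across components, not a calculation), a minimal sketch with hypothetical names might look like the following:

```python
# Illustrative bookkeeping only: CERQual assessments are judgements made by the review
# authors across components, not a mechanical calculation. The helper below merely labels
# the level that results from the number of levels the authors decided to rate down.
LEVELS = ["high", "moderate", "low", "very low"]

def rated_down(levels_down: int) -> str:
    """Return the confidence label after rating down from 'high' by `levels_down` levels."""
    return LEVELS[min(levels_down, len(LEVELS) - 1)]

# Example (invented): the authors judged moderate concerns about adequacy and minor concerns
# about relevance, and decided together that this warranted rating down by one level overall,
# taking care not to "double downgrade" where two components reflect the same underlying issue.
profile_row = {
    "finding": "Example review finding ...",
    "component_concerns": {
        "methodological limitations": "no or very minor concerns",
        "relevance": "minor concerns",
        "coherence": "no or very minor concerns",
        "adequacy of data": "moderate concerns",
    },
    "confidence": rated_down(1),   # -> "moderate"
    "explanation": "Rated down one level owing to concerns about the adequacy of data.",
}
print(profile_row["confidence"])
```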

The assessment of confidence for a review finding is a judgement, and it is therefore particularly important to include an explanation of how this judgement was made. This is discussed further below. Our experience to date in applying CERQual suggests that it may be difficult to achieve “high confidence” for review findings in many areas, as the underlying studies often reveal methodological limitations or there are concerns regarding the adequacy of the data. Those assessing confidence in review findings should specify as far as possible how future studies could address the concerns identified.

Using a “Summary of Qualitative Findings Table” to Summarise the Judgements Made Using CERQual

A summary of qualitative findings table can be used to summarise the key findings from a qualitative evidence synthesis and the confidence in the evidence for each of these findings, as assessed using the CERQual approach. The table should also provide an explanation of the CERQual assessments. An example of a summary of qualitative findings table is provided in Table 4 . There are several advantages to providing a succinct summary of each review finding and an explanation of the CERQual assessment for that finding. Firstly, this may encourage review authors to consider carefully what constitutes a finding in the context of their review and to express these findings clearly ( Box 1 ). Secondly, these tables may facilitate the uptake of qualitative evidence synthesis findings into decision making processes, for example, through evidence-to-decision frameworks [ 13 ]. Thirdly, these tables help to ensure that the judgements underlying CERQual assessments are as transparent as possible.

Table 4. Available at: https://doi.org/10.1371/journal.pmed.1001895.t004
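A summary of qualitative findings table can be produced in whatever format the review uses. As a purely illustrative sketch (the rows, column layout, and code below are invented and are not prescribed by CERQual), a plain-text version could be assembled from the assessments as follows:

```python
# Hypothetical sketch of a plain-text summary of qualitative findings table.
# The rows below are invented examples; neither the layout nor this code is part of CERQual.
rows = [
    {"finding": "Example review finding 1 ...",
     "confidence": "moderate",
     "explanation": "Rated down owing to minor methodological limitations and partial relevance."},
    {"finding": "Example review finding 2 ...",
     "confidence": "low",
     "explanation": "Rated down owing to thin data from a small number of studies."},
]

header = ("Summary of review finding", "Confidence", "Explanation of CERQual assessment")
widths = (30, 10, 60)
print(" | ".join(h.ljust(w) for h, w in zip(header, widths)))
print("-+-".join("-" * w for w in widths))
for row in rows:
    cells = (row["finding"], row["confidence"], row["explanation"])
    print(" | ".join(c.ljust(w) for c, w in zip(cells, widths)))
```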

Applying the CERQual Approach

The first version of the CERQual approach has been applied in five reviews [ 9 , 16 – 19 ], three of which were used by WHO as the basis for the development of a global guideline [ 14 ]. The current version of CERQual has been used in one published review [ 36 ] and is currently being used in a further ten reviews, at least half of which are being produced to support WHO guidance. This experience has highlighted a number of factors that review authors should consider when applying CERQual to review findings, and we discuss these factors below.

General considerations.

To date, the application of CERQual to each review finding has been through discussions among at least two review authors. This seems preferable to use by a single reviewer as it offers an opportunity to discuss judgements and may assist review authors in clearly describing the rationale behind each assessment. In addition, multiple reviewers from different disciplinary backgrounds may offer alternative interpretations of confidence—an approach that has also been suggested to enhance data synthesis itself [ 28 ]. The approach is intended to be applied by review authors with experience in both primary qualitative research and qualitative evidence synthesis.

Assessments of each CERQual component are based on judgements by the review authors, and these judgements need to be described clearly and in detail. Providing a justification for each assessment, preferably in a summary of qualitative findings table, is important for the end user, as this shows how the final assessment was reached and increases the transparency of the process. Further, when end users are seeking evidence for a question that differs slightly from the original review question, they are able to see clearly how the assessment of confidence has been made and to adjust their own confidence in the review finding accordingly.

When making judgements using the CERQual approach, review authors need to be aware of the interactions between the four components. At this stage, CERQual gives equal weight to each component, as we view the components as equally important. Further research is needed on whether equal weighting is appropriate and on areas in which there may be overlap between components.

Our experience applying the CERQual approach so far has indicated that it is easiest to begin with an assessment of methodological limitations. Thereafter, it does not seem to be important in which order the other three components are assessed, as the process is iterative.

It is probably most appropriate for review authors to apply the CERQual approach to their own review, given that prior familiarity with the evidence is needed in order to make reasonable judgements concerning methodological limitations, coherence, relevance, and adequacy of data. However, in principle the approach could be applied to review findings from well-conducted reviews by people other than the review authors. Guidance for this will be developed in the future.

Considerations when assessing methodological limitations.

Qualitative research encompasses a wide range of study designs, and there are multiple tools and approaches for assessing the strengths and weaknesses of qualitative studies [ 26 , 27 , 37 – 40 ]. It is currently not possible to recommend a widely agreed upon, simple, and easy to use set of criteria for assessing methodological limitations for the many types of qualitative studies, and this may not be desirable given continued debates regarding different approaches and our desire for the CERQual approach to be used by the range of qualitative researchers involved in evidence synthesis. However, we believe that it is important to try to identify a minimum set of “core domains” for assessing methodological limitations, and this is a key area for future research.

Considerations when assessing relevance.

In the application of CERQual to date, relevance has been assessed by review authors and not by users, such as decision makers and those who support them or consumer groups. There may be instances in which such users would like to use review findings from a relevant synthesis, but their context differs to some extent from that specified in the review question. Transparent reporting of the assessment of relevance by the review authors provides these users with a starting point from which to understand the reasons behind the assessment. However, it may be difficult for users who are not familiar with the primary studies to assess the relevance to their own context.

Considerations when assessing coherence.

With the CERQual assessment in mind, review authors may be tempted to “smooth out” review findings to eliminate variation or to formulate review findings vaguely in order to artificially increase coherence. However, it is not the intention of CERQual to reduce variation within review findings. Identifying both similarities and differences in the primary data, including accounting for disconfirming cases, is an important part of developing review findings. Review authors should not attempt to create findings that appear more coherent through ignoring or minimising important disconfirming cases. As Patton (1999) points out, “Where patterns and trends have been identified, our understanding of those patterns and trends is increased by considering the instances and cases that do not fit within the pattern” ([ 41 ] p. 1191). Moreover, users of qualitative evidence syntheses are often specifically interested in where a review finding is not relevant or applicable, so as to avoid implementing interventions or guidelines that may be inappropriate or not feasible in their specific context.

Considerations when assessing adequacy of data.

While numbers can be important and useful in qualitative research, qualitative analysis generally focuses on text-based data [ 42 ]. The CERQual component of adequacy of data is not intended to encourage the counting of numbers of studies contributing to a review finding, but rather to focus review authors’ attention on where data may be thin or limited in relation to a review finding. In addition, fewer, more conceptually rich studies contributing to a finding may be more powerful than a larger number of thin, descriptive studies.

CERQual provides users of evidence with a systematic and transparent assessment of how much confidence can be placed in individual review findings from syntheses of qualitative evidence. In addition, the use of CERQual could help review authors to consider, analyse, and report review findings in a more useful and usable way. Qualitative evidence syntheses share with primary qualitative data analysis the need for multiple rounds of revisiting the data “as additional questions emerge, new connections are unearthed, and more complex formulations develop along with a deepening understanding of the material” [ 43 ]. The CERQual approach gives review authors a further opportunity to analyse their data in a more structured way. It guides them through a process of examining and appraising the methodological limitations, relevance, coherence, and adequacy of the data contributing to a review finding. The development of CERQual has identified a number of important research questions, and these are summarised in Box 4 .

Box 4. Way Forward and Research Agenda for CERQual

CERQual is a work in progress, and the following steps are planned to further develop the approach:

  • Detailed guidance for review authors and others who wish to apply the approach is currently being developed. This guidance will address each component of CERQual, describe the approach to assessing levels of confidence, outline how to develop summary of qualitative findings tables, and provide worked examples.
  • To date, CERQual has been piloted on evidence syntheses that have used framework [ 44 ] or narrative synthesis approaches [ 45 ] and that have produced largely descriptive findings. The approach now needs to be tested on syntheses that use other methods or that attempt to develop more explanatory findings such as midlevel theory generation, logic models, or conceptual frameworks. Plans for this are currently underway. This testing will help both to assess whether the approach needs to be expanded or adapted to accommodate different types of findings from the wide range of review approaches currently in use [ 46 ] and to develop appropriate guidance for this.
  • Given the range of synthesis methods available and the many options for presenting review findings, review authors will need to judge on a case-by-case basis when it is appropriate to apply CERQual. Developing guidance on this is also an important area for further methodological research.
  • The development of CERQual has identified several priority issues for methodological research, including identifying core domains for the assessment of methodological limitations in primary qualitative studies and exploring how to apply these, investigating the most appropriate order in which to apply the CERQual components to a finding, understanding the role of “dissemination bias” (e.g., whether studies with “novel” findings are more likely to be published) in the context of qualitative research, and exploring the circumstances under which it may be appropriate to increase or “rate up” confidence in a review finding in relation to a CERQual component.
  • Sampling approaches may be employed in qualitative evidence synthesis as part of a priori inclusion criteria (e.g., based on language or study design) or later in the review process after all potentially relevant studies are identified. Studies may be sampled based on, for instance, principles of data saturation, theoretical sampling, or methodological quality [ 30 ]. Experience is needed with these types of reviews in order to establish the degree to which sampling impacts on CERQual assessments.

Some methodologists have critiqued tools that propose explicit criteria for appraising the quality of qualitative research, questioning whether such tools can adequately assess “quality” for this research method [ 22 ]. We take the standpoint, however, that ways of appraising both primary and secondary qualitative research are needed. Such approaches need to be appropriate to, and take into account the diversity of, qualitative methods [ 27 , 37 , 39 ]. As noted above, users of both primary qualitative research findings and qualitative evidence synthesis findings routinely make these judgements when reading and using these types of research. However, the judgements made by these users are implicit, which makes it difficult for others to understand and critique them—an important limitation when findings from such research are then used to inform decisions about health and social policies. CERQual attempts to make assessments of confidence in the evidence more systematic and transparent while accepting that these assessments are judgements that are likely to vary across assessors.

An intended consequence of the CERQual approach is to improve methodological quality and reporting standards in primary qualitative research. For an adequate CERQual assessment to be made, the authors of primary studies need to provide sufficient information about the methods they have used. Wide use of CERQual may thus encourage more thorough reporting of qualitative research methods.

To support the further development of CERQual and facilitate wide involvement of methodologists, researchers, reviewers, and other stakeholders in this process, we have established a GRADE-CERQual Project Group (see: www.cerqual.org ). This is an informal collaboration of people with an interest in how to assess confidence in evidence from qualitative evidence syntheses and is a subgroup of the GRADE Working Group. We would encourage those with an interest in this area to join the group and contribute to the development of the CERQual approach.

Supporting Information

S1 Table. Key definitions relevant to CERQual.

https://doi.org/10.1371/journal.pmed.1001895.s001

S2 Table. Comparison of the CERQual components and the elements of GRADE.

https://doi.org/10.1371/journal.pmed.1001895.s002

S1 Alternative Language Summary Points.

French translation of the Summary Points.

https://doi.org/10.1371/journal.pmed.1001895.s003

S2 Alternative Language Summary Points.

Italian translation of the Summary Points.

https://doi.org/10.1371/journal.pmed.1001895.s004

S3 Alternative Language Summary Points.

Norwegian translation of the Summary Points.

https://doi.org/10.1371/journal.pmed.1001895.s005

S4 Alternative Language Summary Points.

Spanish translation of the Summary Points.

https://doi.org/10.1371/journal.pmed.1001895.s006

Acknowledgments

Our thanks for their feedback to those who participated in the first GRADE-CERQual Project Group meeting in Barcelona in January 2014: Elie Akl, Zhenggang Bai, Rigmor Berg, Meghan Bohren, Jackie Chandler, Karen Daniels, Bela Ganatra, Andy Oxman, Tomas Pantoja, Kent Ranson, Rebecca Rees, Holger Schünemann, Birte Snilstveit, James Thomas, Hilary Thompson, and Josh Vogel. In addition, we received valuable feedback on this manuscript from Paul Elias Alexander, Meghan Bohren, Philippe du Clos, Joerg Meerpohl, Jasvinder Singh, and Özge Tuncalp and from discussions at several meetings of the GRADE Working Group.

Author Contributions

Wrote the first draft of the manuscript: SL CG HMK. Contributed to the writing of the manuscript: SL CG HMK BC CJC MG JN AB RG AR. Agree with the manuscript’s results and conclusions: SL CG HMK BC CJC MG JN AB RG AR. All authors have read, and confirm that they meet, ICMJE criteria for authorship.

References
  • 6. Petticrew M, Roberts H (2006) Systematic Reviews in the Social Sciences: A Practical Guide. Oxford, UK: Wiley-Blackwell.
  • 8. Thomas J, Sutcliffe K, Harden A, Oakley A, Oliver S, et al. (2003) Children and healthy eating: a systematic review of barriers and facilitators. London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London.
  • 14. WHO (2012) Optimizing health worker roles to improve access to key maternal and newborn health interventions through task shifting. Geneva: World Health Organization.
  • 18. Munthe-Kaas HM, Hammerstrøm KT, Kurtze N, Nordlund KR (2013) Effekt av og erfaringer med kontinuitetsfremmende tiltak i barnevernsinstitusjoner. Oslo: Norwegian Knowledge Centre for the Health Services. Available: http://www.kunnskapssenteret.no/publikasjoner/effekt-av-og-erfaringer-med-kontinuitetsfremmende-tiltak-i-barnevernsinstitusjoner
  • 20. Lewin S, Glenton C, Munthe-Kaas H, et al. (2013) Assessing how much certainty to place in findings from qualitative evidence syntheses: the CerQual approach. Oral presentation, 20th Cochrane Colloquium, Quebec, 2013.
  • 25. CASP (2011) Qualitative Appraisal Checklist for Qualitative Research. Critical Appraisal Skills Programme. www.casp-uk.net/#!casp-tools-checklists/c18f8
  • 26. Government Chief Social Researcher's Office (2003) Quality in Qualitative Evaluation: A framework for assessing research evidence. United Kingdom: Cabinet Office.
  • 43. Berkowitz S (1997) Analyzing qualitative data. In: Frechtling J, Sharp L, editors. User-friendly handbook for mixed method evaluations. Arlington, VA: Division of Research, Evaluation and Communication, National Science Foundation.
  • 44. Booth A, Papaioannou D, Sutton A (2012) Systematic Approaches to a Successful Literature Review. London, UK: Sage Publications.
  • 45. Popay J, Roberts H, Sowden A (2006) Guidance on the Conduct of Narrative Synthesis in Systematic Reviews. A Product from the ESRC Methods Programme. Lancaster: Institute of Health Research.
Are we entering a new era for qualitative research? Using qualitative evidence to support guidance and guideline development by the World Health Organization

Simon Lewin (ORCID: orcid.org/0000-0001-7521-9515) and Claire Glenton

International Journal for Equity in Health, volume 17, Article number: 126 (2018). Open access. Published: 24 September 2018.

Qualitative approaches are one of several methodologies utilised within the social sciences. New developments within qualitative methods are widening the opportunities for using qualitative evidence to inform health policy and systems decisions. In this commentary, we discuss how, in our work with the World Health Organization (WHO), we have explored ways of broadening the types of evidence used to develop evidence-informed guidance for health systems.

Health systems decisions are commonly informed by evidence on the effectiveness of health system interventions. However, decision makers and other stakeholders also typically have additional questions, including how different stakeholders value different outcomes, the acceptability and feasibility of different interventions and the impacts of these interventions on equity and human rights. Evidence from qualitative research can help address these questions, and a number of WHO guidelines are now using qualitative evidence in this way. This growing use of qualitative evidence to inform decision making has been facilitated by recent methodological developments, including robust methods for qualitative evidence syntheses and approaches for assessing how much confidence to place in findings from such syntheses. For research evidence to contribute optimally to improving and sustaining the performance of health systems, it needs to be transferred easily between different elements of what has been termed the ‘evidence ecosystem’. This ecosystem includes primary and secondary evidence producers, guidance developers and those implementing and evaluating interventions to strengthen health systems. We argue that most of the elements of an ecosystem for qualitative evidence are now in place – an important milestone that suggests that we are entering a new era for qualitative research. However, a number of challenges and constraints remain. These include how to build stronger links between the communities involved in the different parts of the qualitative evidence ecosystem and the need to strengthen capacity, particularly in low and middle income countries, to produce and utilise qualitative evidence and decision products informed by such evidence. We invite others who want to support the wider use of qualitative evidence in decision processes to look for opportunities in their settings to put this into practice.

The growing use of qualitative evidence to support decisions, and the availability of methods that can help us use this type of evidence in knowledge-to-action cycles [ 1 , 2 ], suggest that we are entering a new era for qualitative research. Qualitative approaches are one of many methodologies utilised within the social sciences. Here we focus on how new developments in the field of qualitative research are creating important opportunities for using qualitative evidence, including findings from syntheses of qualitative evidence, to inform health policy and systems decisions.

Health systems are complex, creating challenges for decision makers aiming to strengthen these systems and achieve the Sustainable Development Goals [ 3 ]. Most stakeholders agree, however, that decisions about which interventions or policy options to implement should be informed by the best available global and local evidence [ 4 , 5 ]. Within clinical care there is a long history of using evidence-based guidelines to inform decisions. However, evidence-based health systems guidance is a more recent development. Health systems guidance has been defined as “systematically developed statements produced at global or national levels to assist decisions about appropriate options for addressing a health systems challenge in a range of settings and to assist with the implementation of these options and their monitoring and evaluation” [ 6 ]. Health systems guidance can address issues such as options for funding national lay health worker programmes; which digital interventions might effectively support health care delivery; and ways of retaining health care providers in rural areas.

In our work with the World Health Organization (WHO) we have explored ways of broadening the types of evidence that are used to develop health systems guidance [7, 8]. While questions of effectiveness are central to health systems decisions, decision makers also want to know more about how different stakeholders value different outcomes, the acceptability and feasibility of different interventions and the impacts of these interventions on equity and human rights [6, 7, 9]. Evidence from qualitative research can play a key role in addressing these considerations [8]. This is because well-designed qualitative research allows us to explore how people experience and conceptualise the world around them, including health systems and services, and can help us understand how and why these systems succeed or fail. For instance, a recent qualitative study of introducing and implementing humanised childbirth care in a referral hospital in Benin highlighted some of the challenges experienced by midwives and other health care providers in delivering such care, and developed a conceptual model of humanised care in this setting [10]. This primary study was, in turn, incorporated into a qualitative evidence synthesis (or systematic review of qualitative studies) of factors that influence the provision of childbirth care by skilled birth attendants in low and middle income countries [11]. This synthesis subsequently informed a WHO statement on skilled health personnel providing care during childbirth [12]. In addition to the WHO, a range of other organisations involved in producing guidance and health technology assessments are also increasingly using qualitative evidence to answer questions related to, for instance, the acceptability and feasibility of interventions. These organisations include the National Institute for Health and Care Excellence (NICE) in the United Kingdom [13, 14], the Swedish Public Health Institute and the South African Fetal Alcohol Spectrum Disorders Task Team [15].

One example of the use of qualitative evidence to inform WHO guidance is the recent set of WHO recommendations on ‘Antenatal care for a positive pregnancy experience’ [ 16 ]. These include recommendations on health systems interventions to improve the quality of antenatal care and women’s use of this care. To ensure that women’s perspectives shaped the development of this guidance, the WHO first commissioned a qualitative evidence synthesis that gathered studies from across the world exploring what women want, need and value in pregnancy [ 17 ]. The WHO used this synthesis when determining the broader aims of the guidance and the key outcomes to be considered when gathering evidence and making recommendations. For instance, the concept of a “positive pregnancy experience” became the core focus of the guidance as a means of ensuring that person-centred health and well-being was prioritized. Also, the outcome “positive pregnancy experience” was included for most guidance questions, ensuring that each intervention was evaluated against this key issue for women. Following the scoping stage of the guidance, the WHO commissioned a second qualitative evidence synthesis to explore factors influencing women’s use of antenatal services [ 18 ]. These findings fed into the guidance process by answering questions about the acceptability and feasibility of the interventions to women and other stakeholders. A number of other recent WHO guidelines have used similar approaches [ 19 , 20 , 21 ].

The growing use of qualitative evidence to inform decision making has been facilitated by a number of key developments in the field, including better standards for reporting primary qualitative studies [22, 23]; robust methods for undertaking qualitative evidence syntheses [24]; databases for rapidly identifying such syntheses within the health field [25, 26]; the emergence of GRADE-CERQual, an approach for assessing how much confidence to place in findings from qualitative evidence syntheses [27, 28]; and frameworks for packaging different types of evidence to support transparent and systematic assessment by decision makers [29, 30]. The challenge now is to mainstream these efforts so that qualitative evidence, ideally from syntheses of primary qualitative studies, is used more widely to develop health systems guidance and clinical guidelines within WHO and within other guideline development organisations. In our experience, one important constraint is identifying teams to conduct policy-relevant qualitative evidence syntheses, particularly in low and middle income countries. We also need to explore further how to help members of guideline panels and other decision makers engage with different types of evidence and make judgements about these when formulating recommendations. In addition, we need to expand efforts to strengthen capacity in low and middle income countries to undertake primary qualitative research relevant to health policy and systems.

For research evidence to contribute optimally to improving and sustaining the performance of health systems, it needs to be transferred easily between different elements of what has been termed the ‘evidence ecosystem’ [ 31 , 32 , 33 ]. This ecosystem includes those producing primary evidence and those synthesising the evidence; people producing evidence-informed decision products such as health systems guidance and clinical practice guidelines; those responsible for implementing evidence-informed options within health systems, including programme managers and decision makers; and those involved in delivering and using health services, including service providers, service users and citizens [ 31 ]. Recent developments within the field of qualitative research, including those described above, mean that we now have most of the elements of an ecosystem for qualitative evidence in place. Evidence from primary qualitative studies is now being gathered in evidence syntheses; the findings of these syntheses are being used in decision products such as guidance and policy briefs [ 34 , 35 ]; and decision products informed by qualitative evidence are being used to guide choices on health system options and, in turn, are informing choices by service providers and users. Finally, these health system strengthening initiatives are being evaluated through new primary qualitative research [ 36 , 37 ]. In addition, we now have a better understanding of how primary qualitative research should be designed to meet the needs of those synthesising research and decision makers.

Having the elements of a qualitative evidence ecosystem in place is an important milestone and suggests that we are entering a new and exciting era within the field of qualitative research. Of course, a number of challenges and constraints remain, as we have noted earlier. Other challenges include how to build stronger links between the communities involved in the different parts of the qualitative evidence ecosystem, including across all sectors relevant to the Sustainable Development Goals [ 3 ], and the need to strengthen capacity across settings and institutions, particularly in low and middle income countries, to produce and utilise qualitative evidence and decision products informed by such evidence. Our experience of working with the WHO has helped us to collaboratively develop methods for using qualitative evidence in guideline and guidance development. We realise, however, that we are also not yet making full use of the potential of qualitative evidence synthesis findings to shape the development of implementation considerations for guidance or to inform guidance contextualisation, adaptation and implementation processes at national and sub-national levels. This is in part because we need to explore both ways of integrating these findings with local evidence [ 4 ], including the knowledge and experience of local stakeholders, and approaches for working with local stakeholders to develop implementation options. Efforts are also needed to strengthen the capacity of local stakeholders to understand and use qualitative evidence. These are areas in which we and others are now doing methodological research.

As we take this work forward, we should also not forget what is perhaps the most important role that qualitative evidence can play in decision-making: representing the views and experiences of stakeholders, including vulnerable and marginalised groups who are often not represented directly. By drawing on the global body of qualitative evidence, qualitative evidence syntheses have the potential, when used together with direct stakeholder engagement, to help ensure that decisions are guided by stakeholders’ views and that these decisions do not widen inequities. As we have noted elsewhere, using qualitative evidence in this way may also contribute to increased transparency and accountability in public decision-making [ 27 ].

Our experiences of working with the WHO have taught us that the best way of learning is doing. We support Health Systems Global’s efforts to promote both primary and secondary social science research on health policies and systems, and to strengthen capacity and collaborations in this area [ 38 ]. However, we need to go further than producing more social science research that is policy relevant and is conducted in ways that address the tensions and complexities involved in commissioning and undertaking research across different settings and groups [ 39 , 40 ]. As a social science research community, we also need to work more closely with policy users and other stakeholders to build capacity for evidence use. We therefore invite others who believe that we need greater recognition of the value of qualitative research, and who want to support the wider use of qualitative evidence in decision processes, to look for opportunities in their settings to put these beliefs into practice.

Graham ID, Logan J, Harrison MB, Straus SE, Tetroe J, Caswell W, Robinson N. Lost in knowledge translation: time for a map? J Contin Educ Heal Prof. 2006;26(1):13–24.

Straus SE, Kitson A, Harrison MB, Graham ID, Fervers B, Légaré F, Davies B, Edwards N, Majumdar SR. The knowledge-to-action cycle. In: Straus SE, Tetroe J, Graham ID, editors. Knowledge Translation in Health Care. London: Blackwell; 2009.

United Nations. Transforming our world: the 2030 agenda for sustainable development, vol. 2015. New York, NY: UN; 2015. Available at: https://sustainabledevelopment.un.org/content/documents/21252030%20Agenda%20for%20Sustainable%20Development%20web.pdf

Lewin S, Oxman AD, Lavis JN, Fretheim A, Garcia Marti S, Munabi-Babigumira S. SUPPORT tools for evidence-informed policymaking in health 11: Finding and using evidence about local conditions. Health Res Policy Syst. 2009;7(Suppl 1):S11.

Oxman AD, Lavis JN, Lewin S, Fretheim A. SUPPORT Tools for evidence-informed health Policymaking (STP) 1: What is evidence-informed policymaking? Health Res Policy Syst. 2009;7(Suppl 1):S1.

Bosch-Capblanch X, Lavis JN, Lewin S, Atun R, Rottingen JA, Droschel D, Beck L, Abalos E, El-Jardali F, Gilson L, et al. Guidance for evidence-informed policies about health systems: rationale for and challenges of guidance development. PLoS Med. 2012;9(3):e1001185.

Glenton C, Lewin S, Gulmezoglu AM. Expanding the evidence base for global recommendations on health systems: strengths and challenges of the OptimizeMNH guidance process. Implement Sci. 2016;11:98.

Glenton C, Lewin S, Norris SL. Using evidence from qualitative research to develop WHO guidelines (Chapter 15). In: World Health Organization. Handbook for Guideline Development. 2nd ed. Geneva: WHO; 2016.

WHO. Handbook for Guideline Development. 2nd ed. Geneva: World Health Organization; 2016.

Fujita N, Perrin XR, Vodounon JA, Gozo MK, Matsumoto Y, Uchida S, Sugiura Y. Humanised care and a change in practice in a hospital in Benin. Midwifery. 2012;28(4):481–8.

Munabi-Babigumira S, Glenton C, Lewin S, Fretheim A, Nabudere H. Factors that influence the provision of intrapartum and postnatal care by skilled birth attendants in low- and middle-income countries: a qualitative evidence synthesis. Cochrane Database Syst Rev. 2017;11:CD011558.

WHO. Defining competent maternal and newborn health professionals. Geneva: World Health Organization; 2018. Available at: http://apps.who.int/iris/handle/10665/272817 .

Carroll C. Qualitative evidence synthesis to improve implementation of clinical guidelines. BMJ. 2017;356:j80.

Tan TP, Stokes T, Shaw EJ. Use of qualitative research as evidence in the clinical guideline program of the National Institute for health and clinical excellence. Int J Evid Based Healthc. 2009;7(3):169–72.

Adebiyi BO, Mukumbang FC, Okop KJ, Beytell AM. A modified Delphi study towards developing a guideline to inform policy on fetal alcohol spectrum disorders in South Africa: a study protocol. BMJ Open. 2018;8(4):e019907.

WHO. WHO recommendations on antenatal care for a positive pregnancy experience. Geneva: World Health Organization; 2016.

Downe S, Finlayson K, Tunçalp Ö, Metin Gulmezoglu A. What matters to women: a systematic scoping review to identify the processes and outcomes of antenatal care provision that are important to healthy pregnant women. BJOG. 2016;123(4):529–39.

Downe S, Finlayson K, Tunçalp Ö, Gülmezoglu AM. Factors that influence the uptake of routine antenatal services by pregnant women: a qualitative evidence synthesis (Protocol). Cochrane Database Syst Rev. 2016;10:CD012392.

WHO. Health worker roles in providing safe abortion care and post-abortion contraception. Geneva: World Health Organization; 2015.

WHO. WHO recommendations: intrapartum care for a positive childbirth experience. Geneva: World Health Organization; 2018.

WHO. Communicating risk in public health emergencies. A WHO guideline for emergency risk communication (ERC) policy and practice. Geneva: World Health Organization; 2018.

O’Brien BC, Harris IB, Beckman TJ, Reed DA, Cook DA. Standards for reporting qualitative research: a synthesis of recommendations. Acad Med. 2014;89(9):1245–51.

Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349–57.

Noyes J, Booth A, Cargo M, Flemming K, Garside R, Hannes K, Harden A, Harris J, Lewin S, Pantoja T, et al. Cochrane qualitative and implementation methods group guidance series-paper 1: introduction. J Clin Epidemiol. 2018;97:35–8.

Lavis JN, Wilson MG, Moat KA, Hammill AC, Boyko JA, Grimshaw JM, Flottorp S. Developing and refining the methods for a ‘one-stop shop’ for research evidence about health systems. Health Res Policy Syst. 2015;13:10.

Rada G, Perez D, Capurro D. Epistemonikos: a free, relational, collaborative, multilingual database of health evidence. Stud Health Technol Inform. 2013;192:486–90.

Lewin S, Booth A, Glenton C, Munthe-Kaas HM, Rashidian A, Wainwright M, Bohren MA, Tunçalp Ö, Colvin CJ, Garside R, et al. Applying GRADE-CERQual to qualitative evidence synthesis findings: introduction to the series. Implement Sci. 2018;13(Suppl 1):2.

Lewin S, Glenton C, Munthe-Kaas H, Carlsen B, Colvin CJ, Gulmezoglu M, Noyes J, Booth A, Garside R, Rashidian A. Using qualitative evidence in decision making for health and social interventions: an approach to assess confidence in findings from qualitative evidence syntheses (GRADE-CERQual). PLoS Med. 2015;12(10):e1001895.

Alonso-Coello P, Schünemann HJ, Moberg J, Brignardello-Petersen R, Akl EA, Davoli M, Treweek S, Mustafa RA, Rada G, Rosenbaum S, Morelli A, Guyatt GH, Oxman AD; GRADE Working Group. GRADE Evidence to Decision (EtD) frameworks: a systematic and transparent approach to making well informed healthcare choices. 1: Introduction. BMJ. 2016;353:i2016.

Moberg J, Oxman AD, Rosenbaum S, Schunemann HJ, Guyatt G, Flottorp S, Glenton C, Lewin S, Morelli A, Rada G, et al. The GRADE evidence to decision (EtD) framework for health system and public health decisions. Health Res Policy Syst. 2018;16(1):45.

Brandt L, Agoritsas T, Guyatt GH, van de Velde S, Kiuijpers T, Elliot J, Mavergames C, Leng G, MacDonald H, Kunnamo I, et al. A trustworthy, efficient and integrated evidence ecosystem. Forthcoming. 2018;

Elliott JH, Turner T, Clavisi O, Thomas J, Higgins JP, Mavergames C, Gruen RL. Living systematic reviews: an emerging opportunity to narrow the evidence-practice gap. PLoS Med. 2014;11(2):e1001603.

Shepherd JP. How to achieve more effective services: the evidence ecosystem. Cardiff, UK: What Works Network/Cardiff University; 2014. Available at: http://www.scie-socialcareonline.org.uk/how-to-achieve-more-effective-services-the-evidence-ecosystem/r/a11G0000006z7vXIAQ

Lavis JN, Permanand G, Oxman AD, Lewin S, Fretheim A. SUPPORT Tools for evidence-informed health Policymaking (STP) 13: Preparing and using policy briefs to support evidence-informed policymaking. Health Res Policy Syst. 2009;7(Suppl 1):S13.

Moat KA, Lavis JN, Clancy SJ, El-Jardali F, Pantoja T, Knowledge Translation Platform Evaluation study team. Evidence briefs and deliberative dialogues: perceptions and intentions to act on what was learnt. Bull World Health Organ. 2014;92(1):20–8.

Pitchforth E, van Teijlingen E, Graham W, Dixon-Woods M, Chowdhury M. Getting women to hospital is not enough: a qualitative study of access to emergency obstetric care in Bangladesh. Qual Saf Health Care. 2006;15(3):214–9.

Zembe-Mkabile WZ, Jackson D, Sanders D, Besada D, Daniels K, Zamasiya T, Doherty T. The ‘community’ in community case management of childhood illnesses in Malawi. Glob Health Action. 2016;9:29177.

Social science approaches for research and engagement in health policy & systems (SHaPeS) thematic working group of Health Systems Global, Regional Network for Equity in Health in East and Southern Africa, Emerging Voices for Global Health, Daniels K, Loewenson R, George A, Howard N, Koleva G, Lewin S, Marchal B, et al. Fair publication of qualitative research in health systems: a call by health policy and systems researchers. Int J Equity Health. 2016;15:98.

Doherty T, Lewin S, Kinney M, Sanders D, Mathews C, Daviaud E, Goga A, Bhana A, Besada D, Vanleeuw L, et al. Addressing the tensions and complexities involved in commissioning and undertaking implementation research in low- and middle-income countries. BMJ Glob Health. 2018;3:e000741.

de Gruchy J, Lewin S. Ethics that exclude: the role of ethics committees in lesbian and gay health research in South Africa. Am J Public Health. 2001;91(6):865–8.

Acknowledgements

We would like to thank Karen Daniels, Ana Lorena Ruano, Kerry Scott and Stephanie Topp for their helpful and insightful comments on earlier versions of this commentary.

No funding was received for writing this commentary. Both authors have received funding from the Alliance for Health Policy and Systems Research, the Brocher Foundation, Cochrane, the Norwegian Agency for Development Cooperation (Norad), the Research Council of Norway and the WHO in relation to the ideas and work described in this commentary. SL receives additional funding from the South African Medical Research Council.

Availability of data and materials

Not applicable as the manuscript does not contain any data.

Author information

Authors and Affiliations

Norwegian Institute of Public Health, PO Box 222 Skøyen, 0213, Oslo, Norway

Simon Lewin & Claire Glenton

Health Systems Research Unit, South African Medical Research Council, Cape Town, South Africa

Simon Lewin

Cochrane EPOC Group, Norwegian Institute of Public Health, Oslo, Norway

Cochrane Norway, Norwegian Institute of Public Health, PO Box 222 Skøyen, 0213, Oslo, Norway

Claire Glenton

Contributions

SL and CG jointly wrote this commentary. Both authors read and approved the final manuscript.

Corresponding author

Correspondence to Simon Lewin .

Ethics declarations

Authors’ information.

SL works as a health systems researcher and has a background in medicine and in sociology as applied to health. CG works as a health systems researcher and has a background in anthropology and the social sciences as applied to health and health care.

Ethics approval and consent to participate

Not applicable as this is a commentary and not an empirical study.

Consent for publication

Not applicable as the manuscript does not contain data from any individual person.

Competing interests

Simon Lewin is the Joint Coordinating Editor for the Cochrane Effective Practice and Organization of Care (EPOC) Group and a Coordinator of the GRADE-CERQual Project Group. Claire Glenton is the Director of Cochrane Norway, an Editor for the Cochrane Effective Practice and Organization of Care (EPOC) Group and a Coordinator of the GRADE-CERQual Project Group. Both Simon and Claire have worked closely with the WHO on developing WHO guidelines and guidance.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article.

Lewin, S., Glenton, C. Are we entering a new era for qualitative research? Using qualitative evidence to support guidance and guideline development by the World Health Organization. Int J Equity Health 17 , 126 (2018). https://doi.org/10.1186/s12939-018-0841-x

Received : 03 August 2018

Accepted : 10 August 2018

Published : 24 September 2018

DOI : https://doi.org/10.1186/s12939-018-0841-x

Keywords

  • Qualitative research
  • Qualitative evidence synthesis
  • Social science approach
  • Evidence ecosystem
  • Health systems
  • Systematic review
  • Capacity strengthening
  • Decision making
  • Policy making

Qualitative Research: An Overview

  • First Online: 24 April 2019

  • Yanto Chandra
  • Liang Shang

Qualitative research is one of the most commonly used types of research and methodology in the social sciences. Unfortunately, qualitative research is commonly misunderstood. In this chapter, we describe and explain the misconceptions surrounding the qualitative research enterprise, discuss what researchers need to consider when using qualitative research, outline the characteristics of qualitative research, and review the paradigms in qualitative research.

Keywords

  • Qualitative research
  • Gioia approach
  • Yin-Eisenhardt approach
  • Langley approach
  • Interpretivism

Qualitative research is defined as the practice used to study things (individuals and organizations, and their reasons, opinions, motivations and beliefs) in their natural settings. It involves an observer (a researcher) who is located in the field and who transforms the world into a series of representations such as fieldnotes, interviews, conversations, photographs, recordings and memos (Denzin and Lincoln 2011). Many researchers employ qualitative research for exploratory purposes, while others use it as a ‘quasi’ theory-testing approach. Qualitative research is a broad umbrella of research methodologies that encompasses grounded theory (Glaser and Strauss 2017; Strauss and Corbin 1990), case study (Flyvbjerg 2006; Yin 2003), phenomenology (Sanders 1982), discourse analysis (Fairclough 2003; Wodak and Meyer 2009), ethnography (Geertz 1973; Garfinkel 1967), and netnography (Kozinets 2002), among others. Qualitative research is often treated as synonymous with ‘case study research’ because case studies primarily (though not always) use qualitative data.

The quality standards or evaluation criteria of qualitative research comprise: (1) credibility (that a researcher can provide confidence in his/her findings), (2) transferability (that results are more plausible when transported to highly similar contexts), (3) dependability (that errors have been minimized and proper documentation is provided), and (4) confirmability (that conclusions are internally consistent and supported by data) (see Lincoln and Guba 1985).

We classify research along a continuum of theory building → theory elaboration → theory testing. Theory building is also known as theory exploration. Theory elaboration refers to the use of qualitative data and methods to seek “confirmation” of the relationships among the variables, processes or mechanisms of a social reality (Bartunek and Rynes 2015).

In the context of qualitative research, theory/ies usually refer(s) to conceptual model(s) or framework(s) that explain the relationships among a set of variables or processes underlying a social phenomenon. Theory or theories could also refer to general ideas or frameworks (e.g., institutional theory, emancipation theory, or identity theory) that are reviewed as background knowledge prior to the commencement of a qualitative research project.

For example, a qualitative study can ask the following question: “How can institutional change succeed in social contexts that are dominated by organized crime?” (Vaccaro and Palazzo 2015).

We have witnessed numerous cases in which committed positivist methodologists were asked to review qualitative papers and used a survey approach to assess the quality of an interpretivist work. This reviewers’ fallacy is dangerous and hampers the progress of a field of research. Editors must be cognizant of such a fallacy and avoid it.

A social enterprise (SE) is an organization that combines social welfare and commercial logics (Doherty et al. 2014), or that uses business principles to address social problems (Mair and Marti 2006); thus, qualitative research that reports that ‘social impact’ is important for SEs is too descriptive and, arguably, tautological. It is not uncommon to see authors submitting purely descriptive papers to scholarly journals.

Some qualitative researchers have conducted qualitative work using primarily a checklist (ticking the boxes) to show the presence or absence of variables, as if it were a survey-based study. This is utterly inappropriate for a qualitative work. A qualitative work needs to show the richness and depth of qualitative findings. Nevertheless, it is acceptable to use such checklists as supplementary data if a study involves too many informants or variables of interest, or the data is too complex due to its longitudinal nature (e.g., a study involving 15 observed cases, 59 interviews with 33 informants and 7 years of fieldwork used an Excel sheet to tabulate the number of events that occurred, as supplementary data to the main analysis; see Chandra 2017a, b).

As mentioned earlier, there are different types of qualitative research. Thus, a qualitative researcher will customize the data collection process to fit the type of research being conducted. For example, for researchers using ethnography, the primary data will be in the form of photos and/or videos and interviews; for those using netnography, the primary data will be internet-based textual data. Interview data is perhaps the most common type of data used across all types of qualitative research designs and is often synonymous with qualitative research.

The purpose of qualitative research is to provide an explanation, not merely a description and certainly not a prediction (which is the realm of quantitative research). However, description is needed to illustrate the qualitative data collected, and researchers usually describe their qualitative data by inserting a number of important “informant quotes” in the body of a qualitative research report.

We advise qualitative researchers to adhere to one approach to avoid any epistemological and ontological mismatch that may arise among different camps in qualitative research. For instance, mixing a positivist with a constructivist approach in qualitative research frequently leads to unnecessary criticism and even rejection from journal editors and reviewers; it shows a lack of methodological competence or awareness of one’s epistemological position.

Analytical generalization is not generalization to some defined population that has been sampled, but to a “theory” of the phenomenon being studied, a theory that may have much wider applicability than the particular case studied (Yin 2003 ).

There are different types of contributions. Typically, a researcher is expected to clearly articulate the theoretical contributions of a qualitative work submitted to a scholarly journal. Other types of contributions are practical (or managerial), common for business/management journals, and policy, common for policy-related journals.

There is ongoing debate on whether a template for qualitative research is desirable or necessary, with one camp of scholars (the pluralistic critical realists) advocating a pluralistic approach to qualitative research (“qualitative research should not follow a particular template or be prescriptive in its process”) and other camps advocating some form of consensus via the use of particular approaches (e.g., the Eisenhardt or Gioia approach). However, as shown in Table 1.1, even pluralistic critical realism is itself a template, one that advocates an alternative form of consensus through the use of diverse and pluralistic approaches to doing qualitative research.

Alvesson, M., & Kärreman, D. (2007). Constructing mystery: Empirical matters in theory development. Academy of Management Review, 32 (4), 1265–1281.

Bartunek, J. M., & Rynes, S. L. (2015). Qualitative research: It just keeps getting more interesting! In Handbook of qualitative organizational research (pp. 41–55). New York: Routledge.

Brinkmann, S. (2018). Philosophies of qualitative research . New York: Oxford University Press.

Bucher, S., & Langley, A. (2016). The interplay of reflective and experimental spaces in interrupting and reorienting routine dynamics. Organization Science, 27 (3), 594–613.

Chandra, Y. (2017a). A time-based process model of international entrepreneurial opportunity evaluation. Journal of International Business Studies, 48 (4), 423–451.

Chandra, Y. (2017b). Social entrepreneurship as emancipatory work. Journal of Business Venturing, 32 (6), 657–673.

Corley, K. G., & Gioia, D. A. (2004). Identity ambiguity and change in the wake of a corporate spin-off. Administrative Science Quarterly, 49 (2), 173–208.

Cornelissen, J. P. (2017). Preserving theoretical divergence in management research: Why the explanatory potential of qualitative research should be harnessed rather than suppressed. Journal of Management Studies, 54 (3), 368–383.

Denis, J. L., Lamothe, L., & Langley, A. (2001). The dynamics of collective leadership and strategic change in pluralistic organizations. Academy of Management Journal, 44 (4), 809–837.

Denzin, N. K., & Lincoln, Y. S. (2011). Introduction. In N. K. Denzin & Y. S. Lincoln (Eds.), The Sage handbook of qualitative research (4th ed.). Thousand Oaks: Sage.

Doherty, B., Haugh, H., & Lyon, F. (2014). Social enterprises as hybrid organizations: A review and research agenda. International Journal of Management Reviews, 16 (4), 417–436.

Dubé, L., & Paré, G. (2003). Rigor in information systems positivist case research: Current practices, trends, and recommendations. MIS Quarterly, 27 (4), 597–636.

Easton, G. (2010). Critical realism in case study research. Industrial Marketing Management, 39 (1), 118–128.

Eisenhardt, K. M. (1989a). Building theories from case study research. Academy of Management Review, 14 (4), 532–550.

Eisenhardt, K. M. (1989b). Making fast strategic decisions in high-velocity environments. Academy of Management Journal, 32 (3), 543–576.

Fairclough, N. (2003). Analysing discourse: Textual analysis for social research . Abingdon: Routledge.

Flyvbjerg, B. (2006). Five misunderstandings about case-study research. Qualitative Inquiry, 12 (2), 219–245.

Friese, S. (2011). Using ATLAS.ti for analyzing the financial crisis data [67 paragraphs]. Forum Qualitative Sozialforschung/Forum: Qualitative Social Research, 12 (1), Art. 39. http://nbn-resolving.de/urn:nbn:de:0114-fqs1101397

Garfinkel, H. (1967). Studies in ethnomethodology . Malden: Blackwell Publishers.

Geertz, C. (1973). Interpretation of cultures . New York: Basic Books.

Gehman, J., Glaser, V. L., Eisenhardt, K. M., Gioia, D., Langley, A., & Corley, K. G. (2017). Finding theory–method fit: A comparison of three qualitative approaches to theory building. Journal of Management Inquiry, 27, 284–300.

Gioia, D. A. (1992). Pinto fires and personal ethics: A script analysis of missed opportunities. Journal of Business Ethics, 11 (5–6), 379–389.

Gioia, D. A. (2007). Individual epistemology – Interpretive wisdom. In E. H. Kessler & J. R. Bailey (Eds.), The handbook of organizational and managerial wisdom (pp. 277–294). Thousand Oaks: Sage.

Gioia, D. (2019). If I had a magic wand: Reflections on developing a systematic approach to qualitative research. In B. Boyd, R. Crook, J. Le, & A. Smith (Eds.), Research methodology in strategy and management . https://books.emeraldinsight.com/page/detail/Standing-on-the-Shoulders-of-Giants/?k=9781787563360

Gioia, D. A., & Chittipeddi, K. (1991). Sensemaking and sensegiving in strategic change initiation. Strategic Management Journal, 12 (6), 433–448.

Gioia, D. A., Price, K. N., Hamilton, A. L., & Thomas, J. B. (2010). Forging an identity: An insider-outsider study of processes involved in the formation of organizational identity. Administrative Science Quarterly, 55 (1), 1–46.

Gioia, D. A., Corley, K. G., & Hamilton, A. L. (2013). Seeking qualitative rigor in inductive research: Notes on the Gioia methodology. Organizational Research Methods, 16 (1), 15–31.

Glaser, B. G., & Strauss, A. L. (2017). Discovery of grounded theory: Strategies for qualitative research . New York: Routledge.

Graebner, M. E., & Eisenhardt, K. M. (2004). The seller’s side of the story: Acquisition as courtship and governance as syndicate in entrepreneurial firms. Administrative Science Quarterly, 49 (3), 366–403.

Grayson, K., & Shulman, D. (2000). Indexicality and the verification function of irreplaceable possessions: A semiotic analysis. Journal of Consumer Research, 27 (1), 17–30.

Hunt, S. D. (1991). Positivism and paradigm dominance in consumer research: Toward critical pluralism and rapprochement. Journal of Consumer Research, 18 (1), 32–44.

King, G., Keohane, R. O., & Verba, S. (1994). Designing social inquiry: Scientific inference in qualitative research . Princeton: Princeton University Press.

Kozinets, R. V. (2002). The field behind the screen: Using netnography for marketing research in online communities. Journal of Marketing Research, 39 (1), 61–72.

Langley, A. (1988). The roles of formal strategic planning. Long Range Planning, 21 (3), 40–50.

Langley, A., & Abdallah, C. (2011). Templates and turns in qualitative studies of strategy and management. In Building methodological bridges (pp. 201–235). Bingley: Emerald Group Publishing Limited.

Langley, A., Golden-Biddle, K., Reay, T., Denis, J. L., Hébert, Y., Lamothe, L., & Gervais, J. (2012). Identity struggles in merging organizations: Renegotiating the sameness–difference dialectic. The Journal of Applied Behavioral Science, 48 (2), 135–167.

Langley, A. N. N., Smallman, C., Tsoukas, H., & Van de Ven, A. H. (2013). Process studies of change in organization and management: Unveiling temporality, activity, and flow. Academy of Management Journal, 56 (1), 1–13.

Lin, A. C. (1998). Bridging positivist and interpretivist approaches to qualitative methods. Policy Studies Journal, 26 (1), 162–180.

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry . Beverly Hills: Sage.

Mair, J., & Marti, I. (2006). Social entrepreneurship research: A source of explanation, prediction, and delight. Journal of World Business, 41 (1), 36–44.

Nag, R., Corley, K. G., & Gioia, D. A. (2007). The intersection of organizational identity, knowledge, and practice: Attempting strategic change via knowledge grafting. Academy of Management Journal, 50 (4), 821–847.

Ozcan, P., & Eisenhardt, K. M. (2009). Origin of alliance portfolios: Entrepreneurs, network strategies, and firm performance. Academy of Management Journal, 52 (2), 246–279.

Prasad, P. (2018). Crafting qualitative research: Beyond positivist traditions . New York: Taylor & Francis.

Pratt, M. G. (2009). From the editors: For the lack of a boilerplate: Tips on writing up (and reviewing) qualitative research. Academy of Management Journal, 52 (5), 856–862.

Ramoglou, S., & Tsang, E. W. (2016). A realist perspective of entrepreneurship: Opportunities as propensities. Academy of Management Review, 41 (3), 410–434.

Sanders, P. (1982). Phenomenology: A new way of viewing organizational research. Academy of Management Review, 7 (3), 353–360.

Sobh, R., & Perry, C. (2006). Research design and data analysis in realism research. European Journal of Marketing, 40 (11/12), 1194–1209.

Stake, R. E. (2010). Qualitative research: Studying how things work . New York: Guilford Press.

Strauss, A., & Corbin, J. M. (1990). Basics of qualitative research: Grounded theory procedures and techniques . Thousand Oaks: Sage.

Vaccaro, A., & Palazzo, G. (2015). Values against violence: Institutional change in societies dominated by organized crime. Academy of Management Journal, 58 (4), 1075–1101.

Weick, K. E. (1989). Theory construction as disciplined imagination. Academy of Management Review, 14 (4), 516–531.

Welch, C. L., Welch, D. E., & Hewerdine, L. (2008). Gender and export behaviour: Evidence from women-owned enterprises. Journal of Business Ethics, 83 (1), 113–126.

Welch, C., Piekkari, R., Plakoyiannaki, E., & Paavilainen-Mäntymäki, E. (2011). Theorising from case studies: Towards a pluralist future for international business research. Journal of International Business Studies, 42 (5), 740–762.

Wodak, R., & Meyer, M. (Eds.). (2009). Methods for critical discourse analysis . London: Sage.

Yin, R. K. (1981). Life histories of innovations: How new practices become routinized. Public Administration Review, 41 , 21–28.

Yin, R. (2003). Case study research: Design and methods . Thousand Oaks: Sage.

Young, R. A., & Collin, A. (2004). Introduction: Constructivism and social constructionism in the career field. Journal of Vocational Behavior, 64 (3), 373–388.

Author information

Authors and Affiliations

The Hong Kong Polytechnic University, Hong Kong, Kowloon, Hong Kong

Yanto Chandra

City University of Hong Kong, Hong Kong, Kowloon, Hong Kong

Liang Shang

Copyright information

© 2019 Springer Nature Singapore Pte Ltd.

About this chapter

Chandra, Y., Shang, L. (2019). Qualitative Research: An Overview. In: Qualitative Research Using R: A Systematic Approach. Springer, Singapore. https://doi.org/10.1007/978-981-13-3170-1_1

DOI : https://doi.org/10.1007/978-981-13-3170-1_1

Published : 24 April 2019

Publisher Name : Springer, Singapore

Print ISBN : 978-981-13-3169-5

Online ISBN : 978-981-13-3170-1

What counts? The critical role of qualitative data in teachers' decision making

Affiliation.

  • 1 University of California, Los Angeles, 612 Pacific St., Apt. 4, Santa Monica, CA 90405, United States. Electronic address: [email protected].
  • PMID: 35066328
  • DOI: 10.1016/j.evalprogplan.2021.102046

The push for data-based decision making in schools has largely centered on the use of quantitative data to inform technical-rational processes of teachers' decision making. Previous attention to teachers' reliance on qualitative data - particularly unsystematically collected qualitative data - tends to focus on their use of intuition and is often characterized as a counter to evidence based inquiry. Limited research, however, has been conducted to understand how teachers actually apply available data within their classrooms, the factors that shape teachers' decision making, or what they consider credible in assessing their students' progress and achievement. In this collective case study, 15 teachers from three high schools discuss how they exercise professional judgment and make instructional decisions based on qualitative evidence. It takes an intentionally grounded approach to exploring the many data points that teachers draw upon as they face decision moments in their daily practice. In interviews and observations over the course of one school year, teachers describe various types of qualitative data that shed light on students' experiences as they undertake processes of learning. As these teachers glean bits and pieces of systematically and unsystematically collected qualitative data, including informal, undocumented data through conversations and observation, these data inspire reflective questioning and hypotheses about their instructional practice. While student progress should not be wholly assessed based on qualitative data, the findings show that we must acknowledge the inevitable and critical role these data play in guiding teachers' actions and informing their professional judgment. The persistent integration of qualitative data - though sometimes pointed to as a threat to rational decision-making processes - instead confirms a reliance upon them in guiding classroom instruction and a need to ensure their appropriate use.

Keywords: Credible evidence; Data utilization; Data-based decision making; Educational evaluation.

Copyright © 2022 The Authors. Published by Elsevier Ltd. All rights reserved.

MeSH terms

  • Decision Making
  • Program Evaluation
  • School Teachers*

What is Qualitative in Qualitative Research

Patrik Aspers

1 Department of Sociology, Uppsala University, Uppsala, Sweden

2 Seminar for Sociology, Universität St. Gallen, St. Gallen, Switzerland

3 Department of Media and Social Sciences, University of Stavanger, Stavanger, Norway

What is qualitative research? If we look for a precise definition of qualitative research, and specifically for one that addresses its distinctive feature of being “qualitative,” the literature is meager. In this article we systematically search, identify and analyze a sample of 89 sources using or attempting to define the term “qualitative.” Then, drawing on ideas we find scattered across existing work, and based on Becker’s classic study of marijuana consumption, we formulate and illustrate a definition that tries to capture its core elements. We define qualitative research as an iterative process in which improved understanding to the scientific community is achieved by making new significant distinctions resulting from getting closer to the phenomenon studied. This formulation is developed as a tool to help improve research designs while stressing that a qualitative dimension is present in quantitative work as well. Additionally, it can facilitate teaching, communication between researchers, diminish the gap between qualitative and quantitative researchers, help to address critiques of qualitative methods, and be used as a standard of evaluation of qualitative research.

If we assume that there is something called qualitative research, what exactly is this qualitative feature? And how could we evaluate qualitative research as good or not? Is it fundamentally different from quantitative research? In practice, most active qualitative researchers working with empirical material intuitively know what is involved in doing qualitative research, yet perhaps surprisingly, a clear definition addressing its key feature is still missing.

To address the question of what is qualitative we turn to the accounts of “qualitative research” in textbooks and also in empirical work. In his classic, explorative, interview study of deviance Howard Becker ( 1963 ) asks ‘How does one become a marijuana user?’ In contrast to pre-dispositional and psychological-individualistic theories of deviant behavior, Becker’s inherently social explanation contends that becoming a user of this substance is the result of a three-phase sequential learning process. First, potential users need to learn how to smoke it properly to produce the “correct” effects. If not, they are likely to stop experimenting with it. Second, they need to discover the effects associated with it; in other words, to get “high,” individuals not only have to experience what the drug does, but also to become aware that those sensations are related to using it. Third, they require learning to savor the feelings related to its consumption – to develop an acquired taste. Becker, who played music himself, gets close to the phenomenon by observing, taking part, and by talking to people consuming the drug: “half of the fifty interviews were conducted with musicians, the other half covered a wide range of people, including laborers, machinists, and people in the professions” (Becker 1963 :56).

Another central aspect, derived through the common-to-all-research interplay between induction and deduction (Becker 2017), is that during the course of his research Becker adds scientifically meaningful new distinctions in the form of three phases—distinctions, or findings if you will, that strongly affect the course of his research: its focus, the material that he collects, and, eventually, his findings. Each phase typically unfolds through social interaction, and often with input from experienced users in “a sequence of social experiences during which the person acquires a conception of the meaning of the behavior, and perceptions and judgments of objects and situations, all of which make the activity possible and desirable” (Becker 1963:235). In this study the increased understanding of smoking dope is a result of a combination of the meaning of the actors and the conceptual distinctions that Becker introduces based on the views expressed by his respondents. Understanding is the result of research and is due to an iterative process in which data, concepts and evidence are connected with one another (Becker 2017).

Indeed, there are many definitions of qualitative research, but if we look for a definition that addresses its distinctive feature of being “qualitative,” the literature across the broad field of social science is meager. The main reason behind this article lies in the paradox, which, to put it bluntly, is that researchers act as if they know what it is, but they cannot formulate a coherent definition. Sociologists and others will of course continue to conduct good studies that show the relevance and value of qualitative research addressing scientific and practical problems in society. However, our paper is grounded in the idea that providing a clear definition will help us improve the work that we do. Among researchers who practice qualitative research there is clearly much knowledge. We suggest that a definition makes this knowledge more explicit. If the first rationale for writing this paper refers to the “internal” aim of improving qualitative research, the second refers to the increased “external” pressure that especially many qualitative researchers feel; pressure that comes both from society as well as from other scientific approaches. There is a strong core in qualitative research, and leading researchers tend to agree on what it is and how it is done. Our critique is not directed at the practice of qualitative research, but we do claim that the type of systematic work we do has not yet been done, and that it is useful to improve the field and its status in relation to quantitative research.

The literature on the “internal” aim of improving, or at least clarifying qualitative research is large, and we do not claim to be the first to notice the vagueness of the term “qualitative” (Strauss and Corbin 1998 ). Also, others have noted that there is no single definition of it (Long and Godfrey 2004 :182), that there are many different views on qualitative research (Denzin and Lincoln 2003 :11; Jovanović 2011 :3), and that more generally, we need to define its meaning (Best 2004 :54). Strauss and Corbin ( 1998 ), for example, as well as Nelson et al. (1992:2 cited in Denzin and Lincoln 2003 :11), and Flick ( 2007 :ix–x), have recognized that the term is problematic: “Actually, the term ‘qualitative research’ is confusing because it can mean different things to different people” (Strauss and Corbin 1998 :10–11). Hammersley has discussed the possibility of addressing the problem, but states that “the task of providing an account of the distinctive features of qualitative research is far from straightforward” ( 2013 :2). This confusion, as he has recently further argued (Hammersley 2018 ), is also salient in relation to ethnography where different philosophical and methodological approaches lead to a lack of agreement about what it means.

Others (e.g. Hammersley 2018; Fine and Hancock 2017) have also identified the threat to qualitative research that comes from external forces, seen from the point of view of “qualitative research.” This threat can be further divided into that which comes from inside academia, such as the critique voiced by “quantitative research”, and that which comes from outside academia, including, for example, New Public Management. Hammersley (2018), zooming in on one type of qualitative research, ethnography, has argued that it is under threat. Similarly to Fine (2003), and before him Gans (1999), he writes that ethnography has acquired a range of meanings and comes in many different versions, these often reflecting sharply divergent epistemological orientations. And already more than twenty years ago, while reviewing Denzin and Lincoln’s Handbook of Qualitative Methods, Fine argued:

While this increasing centrality [of qualitative research] might lead one to believe that consensual standards have developed, this belief would be misleading. As the methodology becomes more widely accepted, querulous challengers have raised fundamental questions that collectively have undercut the traditional models of how qualitative research is to be fashioned and presented (1995:417).

According to Hammersley, there are today “serious threats to the practice of ethnographic work, on almost any definition” (2018:1). He lists five external threats: (1) that social research must be accountable and able to show its impact on society; (2) the current emphasis on “big data” and on quantitative data and evidence; (3) the labor market pressure in academia that leaves less time for fieldwork (see also Fine and Hancock 2017); (4) problems of access to fields; and (5) the increased ethical scrutiny of projects, to which ethnography is particularly exposed. Hammersley discusses some more or less insufficient existing definitions of ethnography.

The current situation, as Hammersley and others note—and in relation not only to ethnography but also qualitative research in general, and as our empirical study shows—is not just unsatisfactory, it may even be harmful for the entire field of qualitative research, and does not help social science at large. We suggest that the lack of clarity of qualitative research is a real problem that must be addressed.

Towards a Definition of Qualitative Research

Seen in an historical light, what is today called qualitative, or sometimes ethnographic, interpretative research – or a number of other terms – has more or less always existed. At the time the founders of sociology – Simmel, Weber, Durkheim and, before them, Marx – were writing, and during the era of the Methodenstreit (“dispute about methods”) in which the German historical school emphasized scientific methods (cf. Swedberg 1990 ), we can at least speak of qualitative forerunners.

Perhaps the most extended discussion of what later became known as qualitative methods in a classic work is Bronisław Malinowski’s (1922) Argonauts of the Western Pacific, although even this study does not explicitly address the meaning of “qualitative.” In Weber’s ([1921–22] 1978) work we find a tension between scientific explanations that are based on observation and quantification and interpretative research (see also Lazarsfeld and Barton 1982).

If we look through major sociology journals like the American Sociological Review, American Journal of Sociology, or Social Forces we will not find the term qualitative sociology before the 1970s. And certainly before then much of what we consider qualitative classics in sociology, like Becker’s study (1963), had already been produced. Indeed, the Chicago School often combined qualitative and quantitative data within the same study (Fine 1995). Our point is that, before a disciplinary self-awareness emerged, the term quantitative preceded qualitative, and the articulation of the former was a political move to claim scientific status (Denzin and Lincoln 2005). In the US, World War II seems to have sparked a critique of sociological work, including “qualitative work,” that did not follow the scientific canon (Rawls 2018), which was underpinned by a scientifically oriented and value-free philosophy of science. As a result, the attempts at and practice of integrating qualitative and quantitative sociology at Chicago lost ground to the more survey-oriented and quantitative sociology at Columbia under Merton and Lazarsfeld. The quantitative tradition was also able to present textbooks (Lundberg 1951) that facilitated the use of this approach and its “methods.” The practices of the qualitative tradition, by and large, remained tacit or were part of the mentoring transferred from the renowned masters to their students.

This glimpse into history leads us back to the lack of a coherent account condensed in a definition of qualitative research. Many of the attempts to define the term do not meet the requirements of a proper definition: a definition should be clear, avoid tautology, demarcate its domain in relation to the environment, and ideally only use words in its definiens that themselves are not in need of definition (Hempel 1966). A definition can enhance precision and thus clarity by identifying the core of the phenomenon. Preferably, a definition should be short. The typical definition we have found, however, is an ostensive definition, which indicates what qualitative research is about without informing us about what it actually is:

Qualitative research is multimethod in focus, involving an interpretative, naturalistic approach to its subject matter. This means that qualitative researchers study things in their natural settings, attempting to make sense of, or interpret, phenomena in terms of the meanings people bring to them. Qualitative research involves the studied use and collection of a variety of empirical materials – case study, personal experience, introspective, life story, interview, observational, historical, interactional, and visual texts – that describe routine and problematic moments and meanings in individuals’ lives. (Denzin and Lincoln 2005 :2)

Flick claims that the label “qualitative research” is indeed used as an umbrella for a number of approaches (2007:2–4; 2002:6), and it is not difficult to identify research fitting this designation. Moreover, whatever it is, it has grown dramatically over the past five decades. In addition, courses have been developed, methods have flourished, arguments about its future have been advanced (for example, Denzin and Lincoln 1994) and criticized (for example, Snow and Morrill 1995), and dedicated journals and books have mushroomed. Most social scientists have a clear idea of research and how it differs from journalism, politics and other activities. But the question of what is qualitative in qualitative research is either elided or eschewed.

We maintain that this lacuna hinders systematic knowledge production based on qualitative research. Paul Lazarsfeld noted the lack of “codification” as early as 1955 when he reviewed 100 qualitative studies in order to offer a codification of the practices (Lazarsfeld and Barton 1982:239). Since then many texts on “qualitative research” and its methods have been published, including recent attempts (Goertz and Mahoney 2012) similar to Lazarsfeld’s. These studies have tried to extract what is qualitative by looking at the large number of empirical “qualitative” studies. Our novel strategy complements these endeavors by taking another approach: we look at the attempts to codify these practices in the form of a definition, and, to a lesser extent, we take Becker’s study as an exemplar of what qualitative researchers actually do and of what the characteristic of being “qualitative” denotes and implies. We claim that qualitative researchers, if there is such a thing as “qualitative research,” should be able to codify their practices in a condensed, yet general way expressed in language.

Lingering problems of “generalizability” and “how many cases do I need” (Small 2009) are blocking advancement. In this line of work qualitative approaches are said to differ considerably from quantitative ones, while some of the former unsuccessfully mimic principles related to the latter (Small 2009). Additionally, quantitative researchers sometimes unfairly criticize the former based on their own quality criteria. Scholars like Goertz and Mahoney (2012) have successfully focused on the different norms and practices behind what they argue are essentially two different cultures: those working with either qualitative or quantitative methods. Instead, similarly to Becker (2017), who has recently questioned the usefulness of the distinction between qualitative and quantitative research, we focus on similarities.

The current situation also impedes both students and researchers in focusing their studies and understanding each other’s work (Lazarsfeld and Barton 1982:239). A third consequence is that it provides an opening for critiques by scholars operating within different traditions (Valsiner 2000:101). A fourth issue is that the “implicit use of methods in qualitative research makes the field far less standardized than the quantitative paradigm” (Goertz and Mahoney 2012:9). Relatedly, the National Science Foundation in the US organized two workshops in 2004 and 2005 to address the scientific foundations of qualitative research, involving strategies to improve it and to develop standards of evaluation in qualitative research. However, a specific focus on its distinguishing feature of being “qualitative,” while implicitly acknowledged, was discussed only briefly (for example, Best 2004).

In 2014 a theme issue was published in this journal on “Methods, Materials, and Meanings: Designing Cultural Analysis,” discussing central issues in (cultural) qualitative research (Berezin 2014; Biernacki 2014; Glaeser 2014; Lamont and Swidler 2014; Spillman 2014). We agree with many of the arguments put forward, such as the risk of methodological tribalism, and that we should not waste energy on debating methods separated from research questions. Nonetheless, a clarification of the relation to what is called “quantitative research” is of utmost importance to avoid misunderstandings and misguided debates between “qualitative” and “quantitative” researchers. Our strategy implies that researchers, whether they consider themselves “qualitative” or “quantitative,” may in their actual practice combine qualitative and quantitative work.

In this article we accomplish three tasks. First, we systematically survey the literature for meanings of qualitative research by looking at how researchers have defined it. Drawing upon existing knowledge we find that the different meanings and ideas of qualitative research are not yet coherently integrated into one satisfactory definition. Next, we advance our contribution by offering a definition of qualitative research and illustrate its meaning and use partially by expanding on the brief example introduced earlier related to Becker’s work (1963). We offer a systematic analysis of central themes of what researchers consider to be the core of “qualitative,” regardless of style of work. These themes – which we summarize in terms of four keywords: distinction, process, closeness, improved understanding – constitute part of our literature review, in which each one appears, sometimes with others, but never all in the same definition. They serve as the foundation of our contribution. Our categories are overlapping. Their use is primarily to organize the large number of definitions we have identified and analyzed, and not necessarily to draw a clear distinction between them. Finally, we continue the elaboration discussed above on the advantages of a clear definition of qualitative research.

In a hermeneutic fashion we propose that there is something meaningful that deserves to be labelled “qualitative research” (Gadamer 1990). To approach the question “What is qualitative in qualitative research?” we have surveyed the literature. In conducting our survey we first traced the word’s etymology in dictionaries, encyclopedias, handbooks of the social sciences and of methods, and textbooks, mainly in English, as this is the literature commonly used in methodology courses. It should be noted that we have zoomed in on sociology and its literature. This discipline has been the site of the largest debate and development of methods that can be called “qualitative,” which suggests that this field should be examined in great detail.

In an ideal situation we would expect that one good definition, or at least some common ideas, would have emerged over the years. This common core of qualitative research should be so accepted that it would appear in at least some textbooks. Since this is not what we found, we decided to pursue an inductive approach to capture maximal variation in the field of qualitative research; we searched a selection of handbooks, textbooks, book chapters, and books, to which we added the analysis of journal articles. Our sample comprises a total of 89 references.

In practice we focused on the discipline that has had a clear discussion of methods, namely sociology. We also conducted a broad search in the JSTOR database to identify scholarly sociology articles published between 1998 and 2017 in English with a focus on defining or explaining qualitative research. We specifically zoomed in on this time frame because we expected that this more mature period would have produced clear discussions on the meaning of qualitative research. To find these articles we combined a number of keywords to search the content and/or the title: qualitative (which was always included), definition, empirical, research, methodology, studies, fieldwork, interview and observation.
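To make the keyword strategy concrete, the following minimal Python sketch is an illustration only: the query format is an assumption for readability and does not reproduce JSTOR’s actual search syntax. It simply pairs the always-included term with each of the other keywords, for both content and title searches.

```python
from itertools import product

# Illustrative only: "qualitative" is always included and combined with each
# of the other terms listed in the text, searched in the content and/or title.
# The field:term notation below is assumed, not JSTOR's real query language.
ALWAYS_INCLUDED = "qualitative"
OTHER_TERMS = ["definition", "empirical", "research", "methodology",
               "studies", "fieldwork", "interview", "observation"]
SEARCH_FIELDS = ["content", "title"]

queries = [f'{field}: "{ALWAYS_INCLUDED}" AND "{term}"'
           for field, term in product(SEARCH_FIELDS, OTHER_TERMS)]

for query in queries:
    print(query)
```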

As a second phase of our research we searched within nine major sociological journals ( American Journal of Sociology , Sociological Theory , American Sociological Review , Contemporary Sociology , Sociological Forum , Sociological Theory , Qualitative Research , Qualitative Sociology and Qualitative Sociology Review ) for articles also published during the past 19 years (1998–2017) that had the term “qualitative” in the title and attempted to define qualitative research.

Lastly we picked two additional journals, Qualitative Research and Qualitative Sociology, in which we could expect to find texts addressing the notion of “qualitative.” From Qualitative Research we chose Volume 14, Issue 6, December 2014, and from Qualitative Sociology we chose Volume 36, Issue 2, June 2017. Within each of these we selected the first article; then we picked the second article of three prior issues. Again we went back another three issues and investigated article number three. Finally we went back another three issues and perused article number four. This selection procedure was used to obtain a manageable sample for the analysis.
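Stated procedurally, this rule steps back three issues for each successive article position (first article of the starting issue, second article three issues back, and so on). The short Python sketch below is an illustration only, with hypothetical placeholder data; it is not part of the original selection work.

```python
# A minimal sketch of the issue-skipping selection rule described above,
# assuming issues are ordered from the chosen starting issue backwards in time
# and that article positions are counted from the top of each issue.

def select_articles(issues_newest_first):
    """Pick article k (1st, 2nd, 3rd, 4th) from the issue 3*(k-1) steps back."""
    selected = []
    for k in range(4):
        issue = issues_newest_first[3 * k]   # 0, 3, 6, 9 issues back
        selected.append(issue[k])            # 1st, 2nd, 3rd, 4th article
    return selected

# Hypothetical usage: ten issues, each with ten placeholder article ids.
issues = [[f"issue-{i}-article-{j + 1}" for j in range(10)] for i in range(10)]
print(select_articles(issues))
```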

The coding process of the 89 references we gathered for our review began soon after the first round of material was collected, and we reduced the complexity created by our maximum variation sampling (Snow and Anderson 1993:22) to four different categories within which questions on the nature and properties of qualitative research were discussed. We call them: Qualitative and Quantitative Research, Qualitative Research, Fieldwork, and Grounded Theory. This grouping – which may appear illogical – merely reflects the “context” in which the matter of “qualitative” is discussed. While the selection of the material – books and articles – was informed by pre-knowledge, we used an inductive strategy to code the material. When studying our material, we identified four central notions related to “qualitative” that appear in various combinations in the literature and that indicate the core of qualitative research. We have labeled them “distinctions,” “process,” “closeness,” and “improved understanding.” During the research process the categories and notions were improved, refined, changed, and reordered. The coding ended when a sense of saturation in the material arose. In the presentation below all quotations and references come from our empirical material of texts on qualitative research.

Analysis – What is Qualitative Research?

In this section we describe the four categories we identified in the coding, how each discusses qualitative research, and their overall content. Some salient quotations are selected to represent the type of text sorted under each of the four categories. What we present are examples from the literature.

Qualitative and Quantitative

This analytic category comprises quotations comparing qualitative and quantitative research, a distinction that is frequently used (Brown 2010 :231); in effect this is a conceptual pair that structures the discussion and that may be associated with opposing interests. While the general goal of quantitative and qualitative research is the same – to understand the world better – their methodologies and focus in certain respects differ substantially (Becker 1966 :55). Quantity refers to that property of something that can be determined by measurement. In a dictionary of Statistics and Methodology we find that “(a) When referring to *variables, ‘qualitative’ is another term for *categorical or *nominal. (b) When speaking of kinds of research, ‘qualitative’ refers to studies of subjects that are hard to quantify, such as art history. Qualitative research tends to be a residual category for almost any kind of non-quantitative research” (Stiles 1998:183). But it should be obvious that one could employ a quantitative approach when studying, for example, art history.

The same dictionary states that quantitative is “said of variables or research that can be handled numerically, usually (too sharply) contrasted with *qualitative variables and research” (Stiles 1998:184). From a qualitative perspective “quantitative research” is about numbers and counting, and from a quantitative perspective qualitative research is everything that is not about numbers. But this does not say much about what is “qualitative.” If we turn to encyclopedias we find that in the 1932 edition of the Encyclopedia of the Social Sciences there is no mention of “qualitative.” In the Encyclopedia from 1968 we can read:

Qualitative Analysis. For methods of obtaining, analyzing, and describing data, see [the various entries:] CONTENT ANALYSIS; COUNTED DATA; EVALUATION RESEARCH, FIELD WORK; GRAPHIC PRESENTATION; HISTORIOGRAPHY, especially the article on THE RHETORIC OF HISTORY; INTERVIEWING; OBSERVATION; PERSONALITY MEASUREMENT; PROJECTIVE METHODS; PSYCHOANALYSIS, article on EXPERIMENTAL METHODS; SURVEY ANALYSIS, TABULAR PRESENTATION; TYPOLOGIES. (Vol. 13:225)

Some, like Alford, divide researchers into methodologists or, in his words, “quantitative and qualitative specialists” (Alford 1998 :12). Qualitative research uses a variety of methods, such as intensive interviews or in-depth analysis of historical materials, and it is concerned with a comprehensive account of some event or unit (King et al. 1994 :4). Like quantitative research it can be utilized to study a variety of issues, but it tends to focus on meanings and motivations that underlie cultural symbols, personal experiences, phenomena and detailed understanding of processes in the social world. In short, qualitative research centers on understanding processes, experiences, and the meanings people assign to things (Kalof et al. 2008 :79).

Others simply say that qualitative methods are inherently unscientific (Jovanović 2011:19). Hood, for instance, argues that words are intrinsically less precise than numbers, and that they are therefore more prone to subjective analysis, leading to biased results (Hood 2006:219). Qualitative methodologists, in turn, have raised concerns over the limitations of quantitative templates (Brady et al. 2004:4). Scholars such as King et al. (1994), for instance, argue that non-statistical research can produce more reliable results if researchers pay attention to the rules of scientific inference commonly stated in quantitative research. Also, researchers such as Becker (1966:59; 1970:42–43) have asserted that, if conducted properly, qualitative research, and in particular ethnographic field methods, can lead to more accurate results than quantitative studies, in particular survey research and laboratory experiments.

Some researchers, such as Kalof, Dan, and Dietz (2008:79), claim that the boundaries between the two approaches are becoming blurred, and Small (2009) argues that currently much qualitative research (especially in North America) tries unsuccessfully and unnecessarily to emulate quantitative standards. For others, qualitative research tends to be more humanistic and discursive (King et al. 1994:4). Ragin (1994), and similarly Becker (1996:53) and Marchel and Owens (2007:303), think that the main distinction between the two styles is overstated and does not rest on the simple dichotomy of “numbers versus words” (Ragin 1994:xii). Some claim that quantitative data can be utilized to discover associations, but in order to unveil cause and effect a complex research design involving the use of qualitative approaches needs to be devised (Gilbert 2009:35). Consequently, qualitative data are useful for understanding the nuances lying beyond those processes as they unfold (Gilbert 2009:35). Others contend that qualitative research is particularly well suited both to identify causality and to uncover fine descriptive distinctions (Fine and Hallett 2014; Lichterman and Isaac Reed 2014; Katz 2015).

There are other ways to separate these two traditions, including normative statements about what qualitative research should be (that is, better or worse than quantitative approaches, or concerned with scientific approaches to societal change or vice versa; Snow and Morrill 1995; Denzin and Lincoln 2005), or about whether it should develop falsifiable statements (Best 2004).

We propose that quantitative research is largely concerned with pre-determined variables (Small 2008); the analysis concerns the relations between variables. These categories are not themselves questioned in the study, only their frequency or degree, or the correlations between them (cf. Franzosi 2016). If a researcher studies wage differences between women and men, he or she works with given categories: x number of men are compared with y number of women, with a certain wage attributed to each person. The idea is not to move beyond the given categories of wage, men and women; they are the starting point as well as the end point, and undergo no “qualitative change.” Qualitative research, in contrast, investigates relations between categories that are themselves subject to change in the research process. Returning to Becker’s study (1963), we see that he questioned pre-dispositional theories of deviant behavior that work with pre-determined variables, such as an individual’s combination of personal qualities or emotional problems. His take, in contrast, was to understand marihuana consumption by developing “variables” as part of the investigation. He thereby presented new variables, or as we would say today, theoretical concepts, that are grounded in the empirical material.

Qualitative Research

This category contains quotations that refer to descriptions of qualitative research without making comparisons with quantitative research. Researchers such as Denzin and Lincoln, who have written a series of influential handbooks on qualitative methods (1994; Denzin and Lincoln 2003; 2005), citing Nelson et al. (1992:4), argue that because qualitative research is “interdisciplinary, transdisciplinary, and sometimes counterdisciplinary” it is difficult to derive one single definition of it (Jovanović 2011:3). According to them, in fact, “the field” is “many things at the same time,” involving contradictions, tensions over its focus, methods, and how to derive interpretations and findings (2003:11). Similarly, others, such as Flick (2007:ix–x), contend that agreeing on an accepted definition has become increasingly problematic, and that qualitative research has possibly matured into different identities. However, Best holds that “the proliferation of many sorts of activities under the label of qualitative sociology threatens to confuse our discussions” (2004:54). Atkinson’s position is more definite: “the current state of qualitative research and research methods is confused” (2005:3–4).

Qualitative research is about interpretation (Blumer 1969 ; Strauss and Corbin 1998 ; Denzin and Lincoln 2003 ), or Verstehen [understanding] (Frankfort-Nachmias and Nachmias 1996 ). It is “multi-method,” involving the collection and use of a variety of empirical materials (Denzin and Lincoln 1998; Silverman 2013 ) and approaches (Silverman 2005 ; Flick 2007 ). It focuses not only on the objective nature of behavior but also on its subjective meanings: individuals’ own accounts of their attitudes, motivations, behavior (McIntyre 2005 :127; Creswell 2009 ), events and situations (Bryman 1989) – what people say and do in specific places and institutions (Goodwin and Horowitz 2002 :35–36) in social and temporal contexts (Morrill and Fine 1997). For this reason, following Weber ([1921-22] 1978), it can be described as an interpretative science (McIntyre 2005 :127). But could quantitative research also be concerned with these questions? Also, as pointed out below, does all qualitative research focus on subjective meaning, as some scholars suggest?

Others also distinguish qualitative research by claiming that it collects data using a naturalistic approach (Denzin and Lincoln 2005:2; Creswell 2009), focusing on the meaning actors ascribe to their actions. But again, does all qualitative research need to be collected in situ? And does qualitative research have to be inherently concerned with meaning? Flick (2007), referring to Denzin and Lincoln (2005), mentions conversation analysis as an example of qualitative research that is not concerned with the meanings people bring to a situation, but rather with the formal organization of talk. Still others, such as Ragin (1994:85), note that qualitative research is often (especially early on in the project, we would add) less structured than other kinds of social research – a characteristic connected to its flexibility, which can lead to potentially better, but also worse, results. But is this not a feature of this type of research, rather than a defining description of its essence? Wouldn’t this comment also apply, albeit to varying degrees, to quantitative research?

In addition, Strauss (2003), along with others such as Alvesson and Kärreman (2011:10–76), argues that qualitative researchers struggle to capture and represent complex phenomena partially because they tend to collect a large amount of data. While his analysis is correct on some points – “It is necessary to do detailed, intensive, microscopic examination of the data in order to bring out the amazing complexity of what lies in, behind, and beyond those data” (Strauss 2003:10) – much of it concerns the supposed focus of qualitative research and its challenges, rather than exactly what it is about. Even here, it would be a weak case to argue that these are strictly the defining features of qualitative research. Some researchers seem to focus on the approach or the methods used, or even on the way material is analyzed. Several researchers stress the naturalistic assumption of investigating the world, suggesting that meaning and interpretation appear to be a core matter of qualitative research.

We can also see that in this category there is no consensus about specific qualitative methods nor about qualitative data. Many emphasize interpretation, but quantitative research, too, involves interpretation; the results of a regression analysis, for example, certainly have to be interpreted, and the form of meta-analysis that factor analysis provides indeed requires interpretation. However, there is no interpretation of quantitative raw data, i.e., numbers in tables. One common thread is that qualitative researchers have to get to grips with their data in order to understand what is being studied in great detail, irrespective of the type of empirical material that is being analyzed. This observation is connected to the fact that qualitative researchers routinely make several adjustments of focus and research design as their studies progress, in many cases until the very end of the project (Kalof et al. 2008). If you, like Becker, do not start out with a detailed theory, adjustments such as the emergence and refinement of research questions will occur during the research process. We have thus found a number of useful reflections about qualitative research scattered across different sources, but none of them effectively describes the defining characteristics of this approach.

Fieldwork

Although qualitative research does not appear to be defined in terms of a specific method, it is certainly common that fieldwork – i.e., research in which the researcher spends considerable time in the field being studied and uses the knowledge gained as data – is seen as emblematic of, or even identical to, qualitative research. Because fieldwork tends to focus primarily on the collection and analysis of qualitative data, we expected to find within this literature discussions of the meaning of “qualitative.” But, again, this was not the case.

Instead, we found material on the history of this approach (for example, Frankfort-Nachmias and Nachmias 1996 ; Atkinson et al. 2001), including how it has changed; for example, by adopting a more self-reflexive practice (Heyl 2001), as well as the different nomenclature that has been adopted, such as fieldwork, ethnography, qualitative research, naturalistic research, participant observation and so on (for example, Lofland et al. 2006 ; Gans 1999 ).

We retrieved definitions of ethnography, such as “the study of people acting in the natural courses of their daily lives,” involving a “resocialization of the researcher” (Emerson 1988:1) through intense immersion in others’ social worlds (see also examples in Hammersley 2018). This may be accomplished by direct observation and also participation (Neuman 2007:276), although others, such as Denzin (1970:185), have long recognized other types of observation, including non-participant (“fly on the wall”). In this category we have also isolated claims and opposing views, arguing that this type of research is distinguished primarily by where it is conducted (natural settings) (Hughes 1971:496), by how it is carried out (a variety of methods are applied) or, for some most importantly, by involving an active, empathetic immersion in those being studied (Emerson 1988:2). We also retrieved descriptions of the goals it attends to in relation to how it is taught (understanding the subjective meanings of the people studied, primarily developing theory, or contributing to social change) (see for example, Corte and Irwin 2017; Frankfort-Nachmias and Nachmias 1996:281; Trier-Bieniek 2012:639) by collecting the richest possible data (Lofland et al. 2006) to derive “thick descriptions” (Geertz 1973), and/or to aim at theoretical statements of general scope and applicability (for example, Emerson 1988; Fine 2003). We have identified guidelines on how to evaluate it (for example, Becker 1996; Lamont 2004) and have retrieved instructions on how it should be conducted (for example, Lofland et al. 2006): for instance, analysis should take place while the data gathering unfolds (Emerson 1988; Hammersley and Atkinson 2007; Lofland et al. 2006), observations should be of long duration (Becker 1970:54; Goffman 1989), and data should be of high quantity (Becker 1970:52–53). We also retrieved other, more questionable, distinctions between fieldwork and other methods:

Field studies differ from other methods of research in that the researcher performs the task of selecting topics, decides what questions to ask, and forges interest in the course of the research itself . This is in sharp contrast to many ‘theory-driven’ and ‘hypothesis-testing’ methods. (Lofland and Lofland 1995 :5)

But could not, for example, a strictly interview-based study be carried out with the same degree of flexibility, for instance through sequential interviewing (Small 2009)? Once again, are quantitative approaches really as inflexible as some qualitative researchers think? Moreover, this category stresses the role of the actors’ meaning, which requires knowledge of and close interaction with people, their practices and their lifeworld.

It is clear that field studies – which are seen by some as the “gold standard” of qualitative research – are nonetheless only one way of doing qualitative research. There are other methods, but it is not clear why some are more qualitative than others, or why they are better or worse. Fieldwork is characterized by interaction with the field (the material) and understanding of the phenomenon that is being studied. In Becker’s case, he had general experience of fields in which marihuana was used, and on this basis he conducted interviews with actual users in several such fields.

Grounded Theory

Another major category we identified in our sample is Grounded Theory. We found descriptions of it most clearly in Glaser and Strauss’ ([1967] 2010 ) original articulation, Strauss and Corbin ( 1998 ) and Charmaz ( 2006 ), as well as many other accounts of what it is for: generating and testing theory (Strauss 2003 :xi). We identified explanations of how this task can be accomplished – such as through two main procedures: constant comparison and theoretical sampling (Emerson 1998:96), and how using it has helped researchers to “think differently” (for example, Strauss and Corbin 1998 :1). We also read descriptions of its main traits, what it entails and fosters – for instance, an exceptional flexibility, an inductive approach (Strauss and Corbin 1998 :31–33; 1990; Esterberg 2002 :7), an ability to step back and critically analyze situations, recognize tendencies towards bias, think abstractly and be open to criticism, enhance sensitivity towards the words and actions of respondents, and develop a sense of absorption and devotion to the research process (Strauss and Corbin 1998 :5–6). Accordingly, we identified discussions of the value of triangulating different methods (both using and not using grounded theory), including quantitative ones, and theories to achieve theoretical development (most comprehensively in Denzin 1970 ; Strauss and Corbin 1998 ; Timmermans and Tavory 2012 ). We have also located arguments about how its practice helps to systematize data collection, analysis and presentation of results (Glaser and Strauss [1967] 2010 :16).

Grounded theory offers a systematic approach which requires researchers to get close to the field; closeness is a requirement of identifying questions and developing new concepts or making further distinctions with regard to old concepts. In contrast to other qualitative approaches, grounded theory emphasizes the detailed coding process, and the numerous fine-tuned distinctions that the researcher makes during the process. Within this category, too, we could not find a satisfying discussion of the meaning of qualitative research.

Defining Qualitative Research

In sum, our analysis shows that some notions reappear in the discussion of qualitative research, such as understanding, interpretation, “getting close” and making distinctions. These notions capture aspects of what we think is “qualitative.” However, a comprehensive definition that is useful and that can further develop the field is lacking, and not even a clear picture of its essential elements appears. In other words, no definition emerges from our data, and in our research process we have moved back and forth between our empirical data and the attempt to present a definition. Our concrete strategy, as stated above, is to relate qualitative and quantitative research, or more specifically, qualitative and quantitative work. We use an ideal-typical notion of quantitative research which relies on taken-for-granted, numbered variables. This means that the data consist of variables on different scales – ordinal, but frequently ratio and absolute scales – and that the relation of the numbers to the variables, i.e., the justification for assigning numbers to an object or phenomenon, is not questioned, though its validity may be. In this section we return to the notion of quality and try to clarify it while presenting our contribution.

Broadly, research refers to the activity performed by people trained to obtain knowledge through systematic procedures. Notions such as “objectivity” and “reflexivity,” “systematic,” “theory,” “evidence” and “openness” are here taken for granted in any type of research. Next, building on our empirical analysis we explain the four notions that we have identified as central to qualitative work: distinctions, process, closeness, and improved understanding. In discussing them, ultimately in relation to one another, we make their meaning even more precise. Our idea, in short, is that only when these ideas that we present separately for analytic purposes are brought together can we speak of qualitative research.

Distinctions

We believe that the possibility of making new distinctions is one of the defining characteristics of qualitative research. This clearly sets it apart from quantitative analysis, which works with taken-for-granted variables, although, as mentioned, meta-level analyses such as factor analysis may result in new variables. “Quality” refers essentially to distinctions, as already pointed out by Aristotle. He discusses the term “qualitative,” commenting: “By a quality I mean that in virtue of which things are said to be qualified somehow” (Aristotle 1984:14). Quality is about what something is or has, which means that the distinction from its environment is crucial. We see qualitative research as a process in which significant new distinctions are made to the scholarly community; to make distinctions is a key aspect of obtaining new knowledge – a point, as we will see, that also has implications for “quantitative research.” The notion of being “significant” is paramount. New distinctions by themselves are not enough; just adding concepts only increases complexity without furthering our knowledge. The significance of new distinctions is judged against the communal knowledge of the research community. To enable this discussion and these judgements, central elements of rational discussion are required (cf. Habermas [1981] 1987; Davidsson [1988] 2001) to identify what is new and relevant scientific knowledge. Relatedly, Ragin alludes to the idea of new and useful knowledge at a more concrete level: “Qualitative methods are appropriate for in-depth examination of cases because they aid the identification of key features of cases. Most qualitative methods enhance data” (1994:79). When Becker (1963) studied deviant behavior and investigated how people became marihuana smokers, he made distinctions between the ways in which people learned how to smoke. This is a classic example of how the strategy of “getting close” to the material – for example the text, people or pictures that are subject to analysis – may enable researchers to obtain deeper insight and new knowledge by making distinctions, in this instance on the initial notion of learning how to smoke. Others have stressed the making of distinctions in relation to coding or theorizing. Emerson et al. (1995), for example, hold that “qualitative coding is a way of opening up avenues of inquiry,” meaning that the researcher identifies and develops concepts and analytic insights through close examination of and reflection on data (Emerson et al. 1995:151). Goodwin and Horowitz highlight making distinctions in relation to theory-building, writing: “Close engagement with their cases typically requires qualitative researchers to adapt existing theories or to make new conceptual distinctions or theoretical arguments to accommodate new data” (2002:37). In ideal-typical quantitative research only existing and, so to speak, given variables would be used. If this is the case, no new distinctions are made. But would not many “quantitative” researchers also make new distinctions?

Process

Process does not merely suggest that research takes time. It mainly implies that new qualitative knowledge results from a process that involves several phases, and above all iteration. Qualitative research is about oscillation between theory and evidence, between analysis and the generation of material, between first- and second-order constructs (Schütz 1962:59), between getting in contact with something, finding sources, becoming deeply familiar with a topic, and then distilling and communicating some of its essential features. The main point is that the categories that the researcher uses, and perhaps takes for granted at the beginning of the research process, usually undergo qualitative changes resulting from what is found. Becker describes how he tested hypotheses and let the jargon of the users develop into theoretical concepts. This happens over time while the study is being conducted, exemplifying what we mean by process.

In the research process, a pilot-study may be used to get a first glance of, for example, the field, how to approach it, and what methods can be used, after which the method and theory are chosen or refined before the main study begins. Thus, the empirical material is often central from the start of the project and frequently leads to adjustments by the researcher. Likewise, during the main study categories are not fixed; the empirical material is seen in light of the theory used, but it is also given the opportunity to kick back, thereby resisting attempts to apply theoretical straightjackets (Becker 1970 :43). In this process, coding and analysis are interwoven, and thus are often important steps for getting closer to the phenomenon and deciding what to focus on next. Becker began his research by interviewing musicians close to him, then asking them to refer him to other musicians, and later on doubling his original sample of about 25 to include individuals in other professions (Becker 1973:46). Additionally, he made use of some participant observation, documents, and interviews with opiate users made available to him by colleagues. As his inductive theory of deviance evolved, Becker expanded his sample in order to fine tune it, and test the accuracy and generality of his hypotheses. In addition, he introduced a negative case and discussed the null hypothesis ( 1963 :44). His phasic career model is thus based on a research design that embraces processual work. Typically, process means to move between “theory” and “material” but also to deal with negative cases, and Becker ( 1998 ) describes how discovering these negative cases impacted his research design and ultimately its findings.

Obviously, all research is process-oriented to some degree. The point is that the ideal-typical quantitative process does not imply change of the data, or iteration between data, evidence, hypotheses, empirical work, and theory. The data – quantified variables – are in most cases fixed. Merging of data, which of course can be done in a quantitative research process, does not mean new data. New hypotheses are frequently tested, but the “raw data” often remain the same. Obviously, over time new datasets are made available and put into use.

Closeness

Another characteristic that is emphasized in our sample is that qualitative researchers – and in particular ethnographers – can, or as Goffman (1989) put it, ought to, get closer to the phenomenon being studied and to their data than quantitative researchers (for example, Silverman 2009:85). Put differently, essentially because of their methods qualitative researchers get into direct, close contact with those being investigated and/or the material, such as texts, being analyzed. Becker started his interview study, as we noted, by talking to those he knew in the field of music in order to get closer to the phenomenon he was studying. By conducting interviews he got even closer. Had he done more observations, he would undoubtedly have got even closer to the field.

Additionally, ethnographic designs enable researchers to follow the field over time, and the research they do is almost by definition longitudinal, though the time spent in the field obviously differs between studies. The general characteristic of closeness over time maximizes the chances of unexpected events, new data (related, for example, to archival research as additional sources, and, for ethnography, to situations not necessarily previously thought of as instrumental – what Mannay and Morgan (2015) term the “waiting field”), serendipity (Merton and Barber 2004; Åkerström 2013), and possibly reactivity, as well as the opportunity to observe disrupted patterns that translate into exemplars of negative cases. Two classic examples of this are Becker’s finding of what medical students call “crocks” (Becker et al. 1961:317), and Geertz’s (1973) study of “deep play” in Balinese society.

By getting and staying so close to their data – be it pictures, text or humans interacting (Becker was himself a musician) – for a long time, as the research progressively focuses, qualitative researchers are prompted to continually test their hunches, presuppositions and hypotheses. They test them against a reality that often (but certainly not always), practically as well as metaphorically, talks back, whether by validating them or by disqualifying their premises – correctly as well as incorrectly (Fine 2003; Becker 1970). This testing nonetheless often leads to new directions for the research. Becker, for example, says that he was initially reading psychological theories, but when facing the data he developed a theory that looks at, one might say, everything but psychological dispositions to explain the use of marihuana. Researchers using ethnographic methods in particular have a fairly unique opportunity to dig up and then test (in a circular, continuous and temporal way) new research questions and findings as the research progresses, and thereby to derive previously unimagined and uncharted distinctions by getting closer to the phenomenon under study.

Let us stress that getting close is by no means restricted to ethnography. The notion of the hermeneutic circle, and hermeneutics as a general way of understanding, implies that we must get close to the details in order to get the big picture. This also means that qualitative researchers can literally make use of details of pictures as evidence (cf. Harper 2002). Thus, researchers may get closer both when generating the material and when analyzing it.

Quantitative research, we maintain, cannot in its ideal-typical representation get closer to the data. The data are essentially numbers in tables making up the variables (Franzosi 2016:138). The data may originally have been “qualitative,” but once reduced to numbers there can only be a type of “hermeneutics” about what the numbers may stand for. The numbers themselves, however, are non-ambiguous. Thus, in quantitative research, interpretation, if done at all, is not about the data itself – the numbers – but about what the numbers stand for. It follows that the interpretation is essentially done in a more “speculative” mode, without direct empirical evidence (cf. Becker 2017).

Improved Understanding

While distinction, process and getting closer refer to the qualitative work of the researcher, improved understanding refers to the conditions and the outcome of this work. Understanding cuts deeper than explanation, which to some may mean a causally verified correlation between variables. The notion of explanation presupposes the notion of understanding, since explanation does not include an idea of how knowledge is gained (Manicas 2006:15). Understanding, we argue, is the core concept for what we call the outcome of the process, reached when research has made use of all the other elements that were integrated in it. Understanding thus has a special status in qualitative research, since it refers both to the conditions of knowledge and to the outcome of the process. Understanding can to some extent be seen as the condition of explanation, and it occurs in a process of interpretation, which naturally refers to meaning (Gadamer 1990). It is fundamentally connected to knowing, and to the knowing of how to do things (Heidegger [1927] 2001). Conceptually, the term hermeneutics is used to account for this process. Heidegger ties hermeneutics to human being and holds that it cannot be separated from the understanding of being (1988). Here we use it in a broader sense, more connected to method in general (cf. Seiffert 1992). The abovementioned aspects – for example, “objectivity” and “reflexivity” – of the approach are conditions of scientific understanding. Understanding is the result of a circular process and means that the parts are understood in light of the whole, and vice versa. Understanding presupposes pre-understanding, or in other words, some knowledge of the phenomenon studied. Pre-understandings, even in the form of prejudices, are questioned in the qualitative research process, which we see as iterative, and they change gradually or suddenly through the iteration of data, evidence and concepts. Qualitative research thus generates understanding in an iterative process in which the researcher gets closer to the data, e.g., by going back and forth between field and analysis in a process that generates new data that changes the evidence, and, ultimately, the findings. Questioning – asking questions and putting what one assumes, prejudices and presumptions, into question – is central to understanding something (Heidegger [1927] 2001; Gadamer 1990:368–384). We propose that this iterative process, in which understanding occurs, is characteristic of qualitative research.

Improved understanding means that we obtain scientific knowledge of something that we as a scholarly community did not know before, or that we get to know something better. It means that we understand more about how parts are related to one another, and to other things we already understand (see also Fine and Hallett 2014 ). Understanding is an important condition for qualitative research. It is not enough to identify correlations, make distinctions, and work in a process in which one gets close to the field or phenomena. Understanding is accomplished when the elements are integrated in an iterative process.

It is, moreover, possible to understand many things, and researchers, just like children, may come to understand new things every day as they engage with the world. This subjective condition of understanding – namely, that a person gains a better understanding of something – is easily met. To be qualified as “scientific,” the understanding must be general and useful to many; it must be public. But even this generally accessible understanding is not enough in order to speak of “scientific understanding.” Though we as a collective can increase understanding of everything in virtually all potential directions, also as a result of qualitative work, we refrain from this “objective” way of understanding, which has no means of discriminating between the different gains in understanding. Scientific understanding means that it is deemed relevant from the scientific horizon (compare Schütz 1962:35–38, 46, 63), and that it rests on the pre-understanding that scientists have and must have in order to understand. In other words, the understanding gained must be deemed useful by other researchers, so that they can build on it. We thus see understanding from a pragmatic, rather than a subjective or objective, perspective. Improved understanding is related to the question(s) at hand. Understanding, in order to represent an improvement, must be an improvement in relation to the existing body of knowledge of the scientific community (James [1907] 1955). Scientific understanding is, by definition, collective, as expressed in Weber’s famous note on objectivity, namely that scientific work aims at truths “which … can claim, even for a Chinese, the validity appropriate to an empirical analysis” ([1904] 1949:59). By qualifying “improved understanding” in this way we argue that it is a general defining characteristic of qualitative research. Becker’s (1966) study, and other research on deviant behavior, increased our understanding of the social learning processes of how individuals start a behavior. It also added new knowledge about the labeling of deviant behavior as a social process. Few studies, of course, make as large a contribution as Becker’s, but they are nonetheless qualitative research.

Understanding in the phenomenological sense, which we argue is a hallmark of qualitative research, requires meaning, and this meaning is derived from the context, and above all from the data being analyzed. The ideal-typical quantitative research operates with given variables with different numbers. This type of material is not enough to establish meaning at the level that truly justifies understanding. In other words, many social science explanations offer ideas about correlations or even causal relations, but this does not mean that the meaning at the level of the data analyzed is understood. This leads us to say that there are indeed many explanations that meet the criteria of understanding, for example the explanation of how one becomes a marihuana smoker presented by Becker. However, we may also understand a phenomenon without explaining it, and we may have potential explanations – or, better, correlations – that are not really understood.

We may speak more generally of quantitative research and its data to clarify what we see as an important distinction. The “raw data” that quantitative research – as an ideal-typical activity – refers to is not available for further analysis; the numbers, once created, are not to be questioned (Franzosi 2016:138). If the researcher is to do “more” or “change” something, this will be done by conjectures based on theoretical knowledge or on the researcher’s lifeworld. Both qualitative and quantitative research are based on the lifeworld, and all researchers use prejudices and pre-understanding in the research process. This idea is present in the works of Heidegger (2001) and Heisenberg (cited in Franzosi 2010:619). Qualitative research, as we argued, involves the interaction and questioning of concepts (theory), data, and evidence.

Ragin (2004:22) points out that “a good definition of qualitative research should be inclusive and should emphasize its key strengths and features, not what it lacks (for example, the use of sophisticated quantitative techniques).” We define qualitative research as an iterative process in which improved understanding to the scientific community is achieved by making new significant distinctions resulting from getting closer to the phenomenon studied. Qualitative research, as defined here, is consequently a combination of two criteria: (i) how to do things – namely, generating and analyzing empirical material in an iterative process in which one gets closer by making distinctions – and (ii) the outcome – improved understanding novel to the scholarly community. Is our definition applicable to our own study? In this study we have closely read the empirical material that we generated, and the novel distinction of the notion “qualitative research” is the outcome of an iterative process in which both deduction and induction were involved and in which we identified the categories that we analyzed. We thus claim to meet the first criterion, “how to do things.” The second criterion can be judged by us only in a partial way, namely whether the “outcome” – in concrete form, the definition – improves the understanding of others in the scientific community.

We have defined qualitative research, or qualitative scientific work, in relation to quantitative scientific work. Given this definition, qualitative research is about questioning pre-given (taken-for-granted) variables, but it is also about making new distinctions of any type of phenomenon, for example by coining new concepts, including the identification of new variables. This process, as we have discussed, is carried out in relation to empirical material, previous research, and thus in relation to theory. Theory and previous research cannot be escaped or bracketed. According to hermeneutic principles all scientific work is grounded in the lifeworld, and as social scientists we can thus never fully bracket our pre-understanding.

We have proposed that quantitative research, as an ideal type, is concerned with pre-determined variables (Small 2008). Variables are epistemically fixed, but can vary in terms of dimensions, such as frequency or number. Age is an example; as a variable it can take on different numbers. In relation to quantitative research, qualitative research does not reduce its material to numbers and variables. If this is done, the qualitative process comes to a halt: the researcher becomes more distanced from her data, and it is no longer possible to make new distinctions that increase our understanding. We have above discussed the components of our definition in relation to quantitative research. Our conclusion is that in the research that is called quantitative there are frequent and necessary qualitative elements.

Further, comparative empirical research on researchers primarily working with “quantitative” approaches and those working with “qualitative” approaches would, we propose, perhaps show that there are many similarities in the practices of these two approaches. This is not to deny dissimilarities, or the different epistemic and ontic presuppositions that may be more or less strongly associated with the two strands (see Goertz and Mahoney 2012). Our point is nonetheless that prejudices and preconceptions about researchers are unproductive, that, as other researchers have argued, the differences may be exaggerated (e.g., Becker 1996:53, 2017; Marchel and Owens 2007:303; Ragin 1994), and that a qualitative dimension is present in both kinds of work.

Several things follow from our findings. The most important result is the relation to quantitative research. In our analysis we have separated qualitative research from quantitative research. The point is not to label individual researchers, methods, projects, or works as either “quantitative” or “qualitative.” By analyzing, i.e., taking apart, the notions of quantitative and qualitative, we hope to have shown the elements of qualitative research. Our definition captures these elements and how they, when combined in practice, generate understanding. As many of the quotations we have used suggest, one conclusion of our study is that qualitative approaches are not inherently connected with a specific method. Put differently, none of the methods that are frequently labelled “qualitative,” such as interviews or participant observation, are inherently “qualitative.” What matters, given our definition, is whether one works qualitatively or quantitatively in the research process, until the results are produced. Consequently, our analysis also suggests that those researchers working with what in the literature and in jargon is often called “quantitative research” are almost bound to make use of what we have identified as qualitative elements in any research project. Our findings also suggest that many “quantitative” researchers, at least to some extent, are engaged in qualitative work, such as when research questions are developed, variables are constructed and combined, and hypotheses are formulated. Furthermore, a research project may hover between “qualitative” and “quantitative,” or start out as “qualitative” and later move into a “quantitative” phase (a distinct strategy that is not the same as “mixed methods” or simply combining induction and deduction). More generally speaking, the categories of “qualitative” and “quantitative” unfortunately often cover up practices, and this may lead to “camps” of researchers opposing one another. For example, regardless of whether the researcher is primarily oriented to “quantitative” or “qualitative” research, the role of theory tends to be neglected (cf. Swedberg 2017). Our results open up for an interaction characterized not by differences, but by different emphases and by similarities.

Let us take two examples to briefly indicate how qualitative elements can fruitfully be combined with quantitative ones. Franzosi (2010) has discussed the relations between quantitative and qualitative approaches, and more specifically the relation between words and numbers. He analyzes texts and argues that scientific meaning cannot be reduced to numbers. Put differently, the meaning of the numbers is to be understood by what is taken for granted, and by what is part of the lifeworld (Schütz 1962). Franzosi shows how one can use qualitative and quantitative methods and data to address scientific questions, analyzing violence in Italy at the time when fascism was rising (1919–1922). Aspers (2006) studied the meanings of fashion photographers. He uses an empirical phenomenological approach and establishes meaning at the level of actors. In a second step this meaning, and the different ideal-typical photographers constructed as a result of participant observation and interviews, are tested using quantitative data from a database: first to verify the different ideal types, and then to use these types to establish new knowledge about them. In both of these cases – and more examples can be found – the authors move from qualitative data and try to preserve the established meaning when using the quantitative data.

A second main result of our study is that a definition, such as the one we have provided, offers researchers a way to clarify, and even evaluate, what they do. Hence, our definition can guide researchers and students, informing how they think about the concrete research problems they face and showing what it means to get closer to the phenomenon in a process in which new distinctions are made. The definition can also be used to evaluate results, since it provides a standard of evaluation (cf. Hammersley 2007): one can ask whether new distinctions have been made and whether they improve our understanding of what is researched, in addition to evaluating how the research was conducted. Making explicit what qualitative research is also makes it easier to communicate findings, and thereby much harder for substandard research to fly under the radar, since shared standards of evaluation make it easier to separate "good" from "not so good" qualitative research.

To conclude, our analysis, which ends with a definition of qualitative research, can thus address both the "internal" question of what qualitative research is and the "external" critiques that make it harder to do qualitative research, critiques to which both pressure from quantitative methods and broader changes in society contribute.

Acknowledgements

Financial support for this research was provided by the European Research Council, CEV (263699). The authors are grateful to Susann Krieglsteiner for assistance in collecting the data. The paper has benefitted from the many useful comments of the three reviewers and the editor, from comments by members of the Uppsala Laboratory of Economic Sociology, and from comments by Jukka Gronow, Sebastian Kohl, Marcin Serafin, Richard Swedberg, Anders Vassenden and Turid Rødne.

Biographies

Patrik Aspers is professor of sociology at the Department of Sociology, Uppsala University, and at the Universität St. Gallen. His main focus is economic sociology and, in particular, markets. He has published numerous articles and books, including Orderly Fashion (Princeton University Press, 2010), Markets (Polity Press, 2011) and Re-Imagining Economic Sociology (edited with N. Dodd, Oxford University Press, 2015). His book Ethnographic Methods (in Swedish) has gone through several editions.

Ugo Corte is associate professor of sociology at the Department of Media and Social Sciences, University of Stavanger. His research has been published in journals such as Social Psychology Quarterly, Sociological Theory, Teaching Sociology, and Music and Arts in Action. As an ethnographer, he is working on a book on the social world of big-wave surfing.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Patrik Aspers, Email: [email protected] .

Ugo Corte, Email: [email protected] .

  • Åkerström M. Curiosity and serendipity in qualitative research. Qualitative Sociology Review. 2013; 9(2): 10–18.
  • Alford, Robert R. 1998. The craft of inquiry. Theories, methods, evidence. Oxford: Oxford University Press.
  • Alvesson M, Kärreman D. Qualitative research and theory development. Mystery as method. London: SAGE Publications; 2011.
  • Aspers, Patrik. 2006. Markets in Fashion. A Phenomenological Approach. London: Routledge.
  • Atkinson P. Qualitative research. Unity and diversity. Forum: Qualitative Social Research. 2005; 6(3): 1–15.
  • Becker HS. Outsiders. Studies in the sociology of deviance. New York: The Free Press; 1963.
  • Becker HS. Whose side are we on? Social Problems. 1966; 14(3): 239–247.
  • Becker HS. Sociological work. Method and substance. New Brunswick: Transaction Books; 1970.
  • Becker HS. The epistemology of qualitative research. In: Jessor R, Colby A, Shweder RA, editors. Ethnography and human development. Context and meaning in social inquiry. Chicago: University of Chicago Press; 1996. pp. 53–71.
  • Becker HS. Tricks of the trade. How to think about your research while you're doing it. Chicago: University of Chicago Press; 1998.
  • Becker, Howard S. 2017. Evidence. Chicago: University of Chicago Press.
  • Becker H, Geer B, Hughes E, Strauss A. Boys in White. Student culture in medical school. New Brunswick: Transaction Publishers; 1961.
  • Berezin M. How do we know what we mean? Epistemological dilemmas in cultural sociology. Qualitative Sociology. 2014; 37(2): 141–151.
  • Best, Joel. 2004. Defining qualitative research. In Workshop on Scientific Foundations of Qualitative Research, eds. Charles Ragin, Joane Nagel, and Patricia White, 53–54. http://www.nsf.gov/pubs/2004/nsf04219/nsf04219.pdf
  • Biernacki R. Humanist interpretation versus coding text samples. Qualitative Sociology. 2014; 37(2): 173–188.
  • Blumer H. Symbolic interactionism: Perspective and method. Berkeley: University of California Press; 1969.
  • Brady H, Collier D, Seawright J. Refocusing the discussion of methodology. In: Brady H, Collier D, editors. Rethinking social inquiry. Diverse tools, shared standards. Lanham: Rowman and Littlefield; 2004. pp. 3–22.
  • Brown AP. Qualitative method and compromise in applied social research. Qualitative Research. 2010; 10(2): 229–248.
  • Charmaz K. Constructing grounded theory. London: Sage; 2006.
  • Corte, Ugo, and Katherine Irwin. 2017. "The form and flow of teaching ethnographic knowledge: Hands-on approaches for learning epistemology." Teaching Sociology 45(3): 209–219.
  • Creswell JW. Research design. Qualitative, quantitative, and mixed method approaches. 3rd ed. Thousand Oaks: SAGE Publications; 2009.
  • Davidson D. The myth of the subjective. In: Davidson D, editor. Subjective, intersubjective, objective. Oxford: Oxford University Press; 1988. pp. 39–52.
  • Denzin NK. The research act: A theoretical introduction to sociological methods. Chicago: Aldine Publishing Company; 1970.
  • Denzin NK, Lincoln YS. Introduction. The discipline and practice of qualitative research. In: Denzin NK, Lincoln YS, editors. Collecting and interpreting qualitative materials. Thousand Oaks: SAGE Publications; 2003. pp. 1–45.
  • Denzin NK, Lincoln YS. Introduction. The discipline and practice of qualitative research. In: Denzin NK, Lincoln YS, editors. The Sage handbook of qualitative research. Thousand Oaks: SAGE Publications; 2005. pp. 1–32.
  • Emerson RM, editor. Contemporary field research. A collection of readings. Prospect Heights: Waveland Press; 1988.
  • Emerson RM, Fretz RI, Shaw LL. Writing ethnographic fieldnotes. Chicago: University of Chicago Press; 1995.
  • Esterberg KG. Qualitative methods in social research. Boston: McGraw-Hill; 2002.
  • Fine, Gary Alan. 1995. Review of "Handbook of qualitative research." Contemporary Sociology 24(3): 416–418.
  • Fine, Gary Alan. 2003. "Toward a peopled ethnography: Developing theory from group life." Ethnography 4(1): 41–60.
  • Fine GA, Hancock BH. The new ethnographer at work. Qualitative Research. 2017; 17(2): 260–268.
  • Fine GA, Hallett T. Stranger and stranger: Creating theory through ethnographic distance and authority. Journal of Organizational Ethnography. 2014; 3(2): 188–203.
  • Flick U. Qualitative research. State of the art. Social Science Information. 2002; 41(1): 5–24.
  • Flick U. Designing qualitative research. London: SAGE Publications; 2007.
  • Frankfort-Nachmias C, Nachmias D. Research methods in the social sciences. 5th ed. London: Edward Arnold; 1996.
  • Franzosi R. Sociology, narrative, and the quality versus quantity debate (Goethe versus Newton): Can computer-assisted story grammars help us understand the rise of Italian fascism (1919–1922)? Theory and Society. 2010; 39(6): 593–629.
  • Franzosi R. From method and measurement to narrative and number. International Journal of Social Research Methodology. 2016; 19(1): 137–141.
  • Gadamer, Hans-Georg. 1990. Wahrheit und Methode. Grundzüge einer philosophischen Hermeneutik. Band 1, Hermeneutik. Tübingen: J.C.B. Mohr.
  • Gans H. Participant observation in an age of "ethnography." Journal of Contemporary Ethnography. 1999; 28(5): 540–548.
  • Geertz C. The interpretation of cultures. New York: Basic Books; 1973.
  • Gilbert N. Researching social life. 3rd ed. London: SAGE Publications; 2009.
  • Glaeser A. Hermeneutic institutionalism: Towards a new synthesis. Qualitative Sociology. 2014; 37: 207–241.
  • Glaser, Barney G., and Anselm L. Strauss. [1967] 2010. The discovery of grounded theory. Strategies for qualitative research. Hawthorne: Aldine.
  • Goertz G, Mahoney J. A tale of two cultures: Qualitative and quantitative research in the social sciences. Princeton: Princeton University Press; 2012.
  • Goffman E. On fieldwork. Journal of Contemporary Ethnography. 1989; 18(2): 123–132.
  • Goodwin J, Horowitz R. Introduction. The methodological strengths and dilemmas of qualitative sociology. Qualitative Sociology. 2002; 25(1): 33–47.
  • Habermas, Jürgen. [1981] 1987. The theory of communicative action. Oxford: Polity Press.
  • Hammersley M. The issue of quality in qualitative research. International Journal of Research & Method in Education. 2007; 30(3): 287–305.
  • Hammersley, Martyn. 2013. What is qualitative research? Bloomsbury Publishing.
  • Hammersley M. What is ethnography? Can it survive? Should it? Ethnography and Education. 2018; 13(1): 1–17.
  • Hammersley M, Atkinson P. Ethnography. Principles in practice. London: Tavistock Publications; 2007.
  • Heidegger M. Sein und Zeit. Tübingen: Max Niemeyer Verlag; 2001.
  • Heidegger, Martin. [1923] 1988. Ontologie. Hermeneutik der Faktizität. Gesamtausgabe, II. Abteilung: Vorlesungen 1919–1944, Band 63. Frankfurt am Main: Vittorio Klostermann.
  • Hempel CG. Philosophy of the natural sciences. Upper Saddle River: Prentice Hall; 1966.
  • Hood JC. Teaching against the text. The case of qualitative methods. Teaching Sociology. 2006; 34(3): 207–223.
  • James W. Pragmatism. New York: Meridian Books; 1907.
  • Jovanović G. Toward a social history of qualitative research. History of the Human Sciences. 2011; 24(2): 1–27.
  • Kalof L, Dan A, Dietz T. Essentials of social research. London: Open University Press; 2008.
  • Katz J. Situational evidence: Strategies for causal reasoning from observational field notes. Sociological Methods & Research. 2015; 44(1): 108–144.
  • King G, Keohane RO, Verba S. Designing social inquiry. Scientific inference in qualitative research. Princeton: Princeton University Press; 1994.
  • Lamont M. Evaluating qualitative research: Some empirical findings and an agenda. In: Lamont M, White P, editors. Report from workshop on interdisciplinary standards for systematic qualitative research. Washington, DC: National Science Foundation; 2004. pp. 91–95.
  • Lamont M, Swidler A. Methodological pluralism and the possibilities and limits of interviewing. Qualitative Sociology. 2014; 37(2): 153–171.
  • Lazarsfeld P, Barton A. Some functions of qualitative analysis in social research. In: Kendall P, editor. The varied sociology of Paul Lazarsfeld. New York: Columbia University Press; 1982. pp. 239–285.
  • Lichterman, Paul, and Isaac Reed. 2014. Theory and contrastive explanation in ethnography. Sociological Methods & Research. Prepublished 27 October 2014. https://doi.org/10.1177/0049124114554458.
  • Lofland J, Lofland L. Analyzing social settings. A guide to qualitative observation and analysis. 3rd ed. Belmont: Wadsworth; 1995.
  • Lofland J, Snow DA, Anderson L, Lofland LH. Analyzing social settings. A guide to qualitative observation and analysis. 4th ed. Belmont: Wadsworth/Thomson Learning; 2006.
  • Long AF, Godfrey M. An evaluation tool to assess the quality of qualitative research studies. International Journal of Social Research Methodology. 2004; 7(2): 181–196.
  • Lundberg G. Social research: A study in methods of gathering data. New York: Longmans, Green and Co.; 1951.
  • Malinowski B. Argonauts of the Western Pacific: An account of native enterprise and adventure in the archipelagoes of Melanesian New Guinea. London: Routledge; 1922.
  • Manicas P. A realist philosophy of science: Explanation and understanding. Cambridge: Cambridge University Press; 2006.
  • Marchel C, Owens S. Qualitative research in psychology. Could William James get a job? History of Psychology. 2007; 10(4): 301–324.
  • McIntyre LJ. Need to know. Social science research methods. Boston: McGraw-Hill; 2005.
  • Merton RK, Barber E. The travels and adventures of serendipity. A study in sociological semantics and the sociology of science. Princeton: Princeton University Press; 2004.
  • Mannay D, Morgan M. Doing ethnography or applying a qualitative technique? Reflections from the 'waiting field'. Qualitative Research. 2015; 15(2): 166–182.
  • Neuman LW. Basics of social research. Qualitative and quantitative approaches. 2nd ed. Boston: Pearson Education; 2007.
  • Ragin CC. Constructing social research. The unity and diversity of method. Thousand Oaks: Pine Forge Press; 1994.
  • Ragin, Charles C. 2004. Introduction to session 1: Defining qualitative research. In Workshop on Scientific Foundations of Qualitative Research, 22, eds. Charles C. Ragin, Joane Nagel, and Patricia White. http://www.nsf.gov/pubs/2004/nsf04219/nsf04219.pdf
  • Rawls, Anne. 2018. The wartime narrative in US sociology, 1940–7: Stigmatizing qualitative sociology in the name of 'science'. European Journal of Social Theory (Online first).
  • Schütz A. Collected papers I: The problem of social reality. The Hague: Nijhoff; 1962.
  • Seiffert H. Einführung in die Hermeneutik. Tübingen: Franke; 1992.
  • Silverman D. Doing qualitative research. A practical handbook. 2nd ed. London: SAGE Publications; 2005.
  • Silverman D. A very short, fairly interesting and reasonably cheap book about qualitative research. London: SAGE Publications; 2009.
  • Silverman D. What counts as qualitative research? Some cautionary comments. Qualitative Sociology Review. 2013; 9(2): 48–55.
  • Small ML. "How many cases do I need?" On science and the logic of case selection in field-based research. Ethnography. 2009; 10(1): 5–38.
  • Small, Mario L. 2008. Lost in translation: How not to make qualitative research more scientific. In Workshop on interdisciplinary standards for systematic qualitative research, eds. Michelle Lamont and Patricia White, 165–171. Washington, DC: National Science Foundation.
  • Snow DA, Anderson L. Down on their luck: A study of homeless street people. Berkeley: University of California Press; 1993.
  • Snow DA, Morrill C. New ethnographies: Review symposium: A revolutionary handbook or a handbook for revolution? Journal of Contemporary Ethnography. 1995; 24(3): 341–349.
  • Strauss AL. Qualitative analysis for social scientists. 14th printing. Cambridge: Cambridge University Press; 2003.
  • Strauss AL, Corbin JM. Basics of qualitative research. Techniques and procedures for developing grounded theory. 2nd ed. Thousand Oaks: Sage Publications; 1998.
  • Swedberg, Richard. 2017. Theorizing in sociological research: A new perspective, a new departure? Annual Review of Sociology 43: 189–206.
  • Swedberg R. The new 'Battle of Methods'. Challenge. 1990 (January–February); 3(1): 33–38.
  • Timmermans S, Tavory I. Theory construction in qualitative research: From grounded theory to abductive analysis. Sociological Theory. 2012; 30(3): 167–186.
  • Trier-Bieniek A. Framing the telephone interview as a participant-centred tool for qualitative research. A methodological discussion. Qualitative Research. 2012; 12(6): 630–644.
  • Valsiner J. Data as representations. Contextualizing qualitative and quantitative research strategies. Social Science Information. 2000; 39(1): 99–113.
  • Weber, Max. [1904] 1949. 'Objectivity' in social science and social policy. Ed. Edward A. Shils and Henry A. Finch, 49–112. New York: The Free Press.
