What Is Qualitative Research? | Methods & Examples
Published on June 19, 2020 by Pritha Bhandari. Revised on June 22, 2023.
Qualitative research involves collecting and analyzing non-numerical data (e.g., text, video, or audio) to understand concepts, opinions, or experiences. It can be used to gather in-depth insights into a problem or generate new ideas for research.
Qualitative research is the opposite of quantitative research, which involves collecting and analyzing numerical data for statistical analysis.
Qualitative research is commonly used in the humanities and social sciences, in subjects such as anthropology, sociology, education, health sciences, and history. Examples of qualitative research questions include:
- How does social media shape body image in teenagers?
- How do children and adults interpret healthy eating in the UK?
- What factors influence employee retention in a large organization?
- How is anxiety experienced around the world?
- How can teachers integrate social issues into science curriculums?
Table of contents
- Approaches to qualitative research
- Qualitative research methods
- Qualitative data analysis
- Advantages of qualitative research
- Disadvantages of qualitative research
- Frequently asked questions about qualitative research
Qualitative research is used to understand how people experience the world. While there are many approaches to qualitative research, they tend to be flexible and focus on retaining rich meaning when interpreting data.
Common approaches include grounded theory, ethnography, action research, phenomenological research, and narrative research. They share some similarities, but emphasize different aims and perspectives.
Note that qualitative research is at risk for certain research biases, including the Hawthorne effect, observer bias, recall bias, and social desirability bias. While not always totally avoidable, awareness of potential biases as you collect and analyze your data can prevent them from impacting your work too much.
Each of these research approaches involves using one or more data collection methods. These are some of the most common qualitative methods:
- Observations: recording what you have seen, heard, or encountered in detailed field notes.
- Interviews: personally asking people questions in one-on-one conversations.
- Focus groups: asking questions and generating discussion among a group of people.
- Surveys: distributing questionnaires with open-ended questions.
- Secondary research: collecting existing data in the form of texts, images, audio or video recordings, etc.
For example, in a study of a company's culture, you might combine several of these methods:
- You take field notes with observations and reflect on your own experiences of the company culture.
- You distribute open-ended surveys to employees across all the company’s offices by email to find out if the culture varies across locations.
- You conduct in-depth interviews with employees in your office to learn about their experiences and perspectives in greater detail.
Qualitative researchers often consider themselves “instruments” in research because all observations, interpretations and analyses are filtered through their own personal lens.
For this reason, when writing up your methodology for qualitative research, it’s important to reflect on your approach and to thoroughly explain the choices you made in collecting and analyzing the data.
Qualitative data can take the form of texts, photos, videos and audio. For example, you might be working with interview transcripts, survey responses, fieldnotes, or recordings from natural settings.
Most types of qualitative data analysis share the same five steps:
- Prepare and organize your data. This may mean transcribing interviews or typing up fieldnotes.
- Review and explore your data. Examine the data for patterns or repeated ideas that emerge.
- Develop a data coding system. Based on your initial ideas, establish a set of codes that you can apply to categorize your data.
- Assign codes to the data. For example, in qualitative survey analysis, this may mean going through each participant’s responses and tagging them with codes in a spreadsheet. As you go through your data, you can create new codes to add to your system if necessary.
- Identify recurring themes. Link codes together into cohesive, overarching themes.
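Steps 3–5 can be sketched in a short script. This is a purely illustrative sketch with invented responses and a hypothetical keyword-based codebook; real qualitative coding is interpretive, not mechanical keyword matching, so treat this only as an illustration of the bookkeeping involved.

```python
from collections import Counter

# Hypothetical open-ended survey responses about company culture.
responses = [
    "The open office is noisy, but my teammates are supportive.",
    "Management never explains decisions, and morale feels low.",
    "I value the flexible hours and the supportive culture.",
]

# Step 3: develop a coding system (here, each code is triggered by keywords).
codebook = {
    "noise": ["noisy", "loud"],
    "support": ["supportive", "helpful"],
    "communication": ["explains", "informed"],
    "flexibility": ["flexible", "remote"],
}

# Step 4: assign codes to each response.
coded = []
for text in responses:
    lowered = text.lower()
    codes = [code for code, words in codebook.items()
             if any(word in lowered for word in words)]
    coded.append({"text": text, "codes": codes})

# Step 5: tally codes to surface recurring themes.
theme_counts = Counter(code for item in coded for code in item["codes"])
print(theme_counts.most_common())
```

In practice this tagging is done by a human researcher (often in a spreadsheet or dedicated software), and new codes are added to the codebook as the data suggests them.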
There are several specific approaches to analyzing qualitative data. Although these methods share similar processes, they emphasize different concepts.
Qualitative research often tries to preserve the voice and perspective of participants and can be adjusted as new research questions arise. Qualitative research is good for:
- Flexibility
The data collection and analysis process can be adapted as new ideas or patterns emerge. They are not rigidly decided beforehand.
- Natural settings
Data collection occurs in real-world contexts or in naturalistic ways.
- Meaningful insights
Detailed descriptions of people’s experiences, feelings and perceptions can be used in designing, testing or improving systems or products.
- Generation of new ideas
Open-ended responses mean that researchers can uncover novel problems or opportunities that they wouldn’t have thought of otherwise.
Researchers must consider practical and theoretical limitations in analyzing and interpreting their data. Qualitative research suffers from:
- Unreliability
The real-world setting often makes qualitative research unreliable because of uncontrolled factors that affect the data.
- Subjectivity
Due to the researcher’s primary role in analyzing and interpreting data, qualitative research cannot be replicated. The researcher decides what is important and what is irrelevant in data analysis, so interpretations of the same data can vary greatly.
- Limited generalizability
Small samples are often used to gather detailed data about specific contexts. Despite rigorous analysis procedures, it is difficult to draw generalizable conclusions because the data may be biased and unrepresentative of the wider population.
- Labor-intensive
Although software can be used to manage and record large amounts of text, data analysis often has to be checked or performed manually.
Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.
Quantitative methods allow you to systematically measure variables and test hypotheses. Qualitative methods allow you to explore concepts and experiences in more detail.
There are five common approaches to qualitative research:
- Grounded theory involves collecting data in order to develop new theories.
- Ethnography involves immersing yourself in a group or organization to understand its culture.
- Narrative research involves interpreting stories to understand how people make sense of their experiences and perceptions.
- Phenomenological research involves investigating phenomena through people’s lived experiences.
- Action research links theory and practice in several cycles to drive innovative changes.
Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.
There are various approaches to qualitative data analysis, but they all share five steps in common:
- Prepare and organize your data.
- Review and explore your data.
- Develop a data coding system.
- Assign codes to the data.
- Identify recurring themes.
The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis, thematic analysis, and discourse analysis.
The Importance of Qualitative Measurement in Driving Social Good
Qualitative measurement is a broad and complicated field, but it is essential for determining the success of a social impact endeavor. To help ensure that your initiative succeeds, we’ve compiled an overview of measuring qualitative data (and how it differs from quantitative data), along with background on the many forms of qualitative data, techniques, and methodologies. Before embarking on qualitative measurement, here are a few key concepts to consider.
What is Qualitative Measurement?
Data acquired through qualitative measurement describes traits or characteristics. It's gathered through surveys, interviews, or observation, and it's usually presented as a narrative. Qualitative data might take the form of descriptive words that can be analyzed for patterns or significance using coding. Coding helps the researcher categorize qualitative data and perform quantitative analysis by identifying themes related to the research objectives.
How to Collect Qualitative Data
Foundations can collect data for qualitative research in a variety of ways. RCTs (randomized controlled trials) have long been considered by government agencies to be the gold standard for measuring effect.
In-depth interviews, in which a researcher asks questions of a person or group touched by a topic, are one form of qualitative assessment. Their format is determined by the researcher's questions and the study's feasibility in terms of time and financial resources.
Qualitative researchers often use direct observation to obtain data. This technique aids investigators in studying phenomena in precise settings as they occur in real life. Researchers can also use written resources to do qualitative research, such as books, periodicals, newspapers, and transcripts.
Qualitative vs. Quantitative Measurements
Before comparing the differences, let’s first lay out definitions of both quantitative and qualitative measures:
What are Quantitative Measures?
Numbers and graphs are used in quantitative research. It's utilized to put ideas and assumptions to the test or validate them. This sort of study may be used by businesses to develop generalizable facts about a subject. Experiments, observations recorded as numbers, and surveys with closed-ended questions are all common quantitative approaches.
What are Qualitative Measures?
Qualitative measurements are recorded in words and used to decipher ideas, thoughts, and experiences. This sort of study allows you to learn more about issues that aren't well understood. Interviews with open-ended questions, observations recorded in words, and literature reviews that investigate concepts and theories are all common qualitative approaches.
When Should You Use Qualitative vs. Quantitative Measurements?
When determining whether to utilize qualitative or quantitative data, a good rule of thumb is:
- If you want to confirm or test a theory or hypothesis, use quantitative research.
- If you want to learn more about concepts, thoughts, or experiences, conduct qualitative measurement.
You can use a qualitative, quantitative, or mixed-methods approach to most research issues. Which kind you pick is determined by several factors, including:
- Whether you're conducting inductive or deductive research
- Your research questions
- Whether you're conducting experimental, correlational, or descriptive research
Time, money, data availability, and access to respondents are all practical issues that researchers should consider.
Although quantitative approaches generate data that can be pooled and analyzed to characterize and forecast correlations, qualitative measurement can help explore and explain those relationships as well as contextual variations in their quality.
Qualitative research may use social analytical frameworks to understand observable patterns and trends, including the study of socially differentiated outcomes, and analyze poverty as a dynamic process rather than a static outcome.
How to Measure Qualitative Data
Results must be measured, but not every result can be tallied, recorded, or neatly fit into a framework. Some qualitative outcomes are intangible, such as "empowerment," "confidence," or "capacity." Just because these things are tough to count doesn't mean you can't measure them; you'll simply need to measure them in other ways, including qualitative and mixed techniques.
Define the objective
You must first identify what you’re querying before deciding on a technique. What does "empowerment" mean in your program, for example, if it strives to empower women? Does this imply that women have some power over domestic decisions? Does this indicate that they attend community meetings? Or that they have the capability to leave a situation that makes them uncomfortable?
Select a method
To assess qualitative outcomes, you can employ a variety of techniques. Only a few are mentioned here. Combining multiple approaches to obtain diverse viewpoints might be beneficial. It would be ideal if you could alter or change techniques to fit your program's needs.
Interviews and surveys
For qualitative approaches, interviews and focus groups let you meet with recipients and stakeholders directly to discuss their experiences and the program's results. Consider what you're measuring and whether a different approach might provide you with more valuable data.
Journals and logbooks
People in charge of a program, participants, and stakeholders can benefit from diaries, logs, and journals. Participants might also be given a journal to keep track of their experiences and ideas. Examining their diary entries may reveal if the training has influenced their thoughts or behaviors.
Photographs and art
Pictures and photographs are excellent examples of qualitative measures, as many people find it simpler to convey changes visually. Vulnerable youth, for example, can be encouraged to make a painting depicting their lives before and after joining the program.
The Best Ways to Communicate Qualitative Data
Here are some methods foundations can employ to communicate the data they've acquired from qualitative data:
Create separate outputs for each target audience
You may wish to reach out to various people, including the general public, legislators, and specialists in a specific sector. Consider producing a series of brief outputs for each of them. A summary prepared for a lay readership may lack the degree of information that a government body requires. You may also wish to highlight facts that are of particular relevance to specific readers.
Link to current events
Qualitative measurement frequently takes a highly in-depth approach to a single research issue but may also interact with broader but related themes. Consider concerns that aren't only part of a short-term media cycle but rather longer-term trends that are likely to resurface, such as housing prices or obesity, whenever feasible. It's not essential to twist your key results to make them fit; simply create a meaningful link.
Create a compelling narrative
Clearly communicating a story is the key to effectively communicating qualitative data. Individuals are more engaged by stories about people than they are by cold facts and numbers. Give your stories context and causality (this occurred to this person as a result of this), and you'll be following the same fundamental guidelines for excellent storytelling that screenwriters and novelists do.
Describe the method
When presenting qualitative data, consider that many people are unfamiliar with qualitative research and the methodologies you may have employed. Provide a brief description of your research methodology. If the reader is intrigued, provide them a means to learn more about it elsewhere, like a publication or a project website. It's preferable to tease the reader and make them desire more than to offer too much information up front.
Challenges of Qualitative Measurement
Acquiring and analyzing qualitative measurements for a social impact assessment can be a challenging undertaking. Qualitative measurement requires thorough, in-depth study, usually necessitating an expensive and time-consuming cost-benefit analysis.
At UpMetrics, we have made strides to mitigate these concerns. We think that by combining qualitative and quantitative data in one place, organizations can create a comprehensive narrative that details their effect in a compelling way that motivates action.
Contact us to learn how UpMetrics' impact analytics platform can benefit you and start collecting qualitative and relevant data to help you understand and express your impact with greater clarity.
- Neurol Res Pract
How to use and assess qualitative research methods
1 Department of Neurology, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120 Heidelberg, Germany
2 Clinical Cooperation Unit Neuro-Oncology, German Cancer Research Center, Heidelberg, Germany
This paper aims to provide an overview of the use and assessment of qualitative research methods in the health sciences. Qualitative research can be defined as the study of the nature of phenomena and is especially appropriate for answering questions of why something is (not) observed, assessing complex multi-component interventions, and focussing on intervention improvement. The most common methods of data collection are document study, (non-) participant observations, semi-structured interviews and focus groups. For data analysis, field-notes and audio-recordings are transcribed into protocols and transcripts, and coded using qualitative data management software. Criteria such as checklists, reflexivity, sampling strategies, piloting, co-coding, member-checking and stakeholder involvement can be used to enhance and assess the quality of the research conducted. Using qualitative in addition to quantitative designs will equip us with better tools to address a greater range of research problems, and to fill in blind spots in current neurological research and practice.
The aim of this paper is to provide an overview of qualitative research methods, including hands-on information on how they can be used, reported and assessed. This article is intended for beginning qualitative researchers in the health sciences as well as experienced quantitative researchers who wish to broaden their understanding of qualitative research.
What is qualitative research?
Qualitative research is defined as “the study of the nature of phenomena”, including “their quality, different manifestations, the context in which they appear or the perspectives from which they can be perceived” , but excluding “their range, frequency and place in an objectively determined chain of cause and effect” [ 1 ]. This formal definition can be complemented with a more pragmatic rule of thumb: qualitative research generally includes data in form of words rather than numbers [ 2 ].
Why conduct qualitative research?
Because some research questions cannot be answered using (only) quantitative methods. For example, one Australian study addressed the issue of why patients from Aboriginal communities often present late or not at all to specialist services offered by tertiary care hospitals. Using qualitative interviews with patients and staff, it found one of the most significant access barriers to be transportation problems, including some towns and communities simply not having a bus service to the hospital [ 3 ]. A quantitative study could have measured the number of patients over time or even looked at possible explanatory factors – but only those previously known or suspected to be of relevance. To discover reasons for observed patterns, especially the invisible or surprising ones, qualitative designs are needed.
While qualitative research is common in other fields, it is still relatively underrepresented in health services research. The latter field is more traditionally rooted in the evidence-based-medicine paradigm, as seen in " research that involves testing the effectiveness of various strategies to achieve changes in clinical practice, preferably applying randomised controlled trial study designs (...) " [ 4 ]. This focus on quantitative research and specifically randomised controlled trials (RCT) is visible in the idea of a hierarchy of research evidence which assumes that some research designs are objectively better than others, and that choosing a "lesser" design is only acceptable when the better ones are not practically or ethically feasible [ 5 , 6 ]. Others, however, argue that an objective hierarchy does not exist, and that, instead, the research design and methods should be chosen to fit the specific research question at hand – "questions before methods" [ 2 , 7 – 9 ]. This means that even when an RCT is possible, some research problems require a different design that is better suited to addressing them. Arguing in JAMA, Berwick uses the example of rapid response teams in hospitals, which he describes as " a complex, multicomponent intervention – essentially a process of social change" susceptible to a range of different context factors including leadership or organisation history. According to him, "[in] such complex terrain, the RCT is an impoverished way to learn. Critics who use it as a truth standard in this context are incorrect" [ 8 ] . Instead of limiting oneself to RCTs, Berwick recommends embracing a wider range of methods , including qualitative ones, which for "these specific applications, (...) are not compromises in learning how to improve; they are superior" [ 8 ].
Research problems that can be approached particularly well using qualitative methods include assessing complex multi-component interventions or systems (of change), addressing questions beyond “what works”, towards “what works for whom when, how and why”, and focussing on intervention improvement rather than accreditation [ 7 , 9 – 12 ]. Using qualitative methods can also help shed light on the “softer” side of medical treatment. For example, while quantitative trials can measure the costs and benefits of neuro-oncological treatment in terms of survival rates or adverse effects, qualitative research can help provide a better understanding of patient or caregiver stress, visibility of illness or out-of-pocket expenses.
How to conduct qualitative research?
Given that qualitative research is characterised by flexibility, openness and responsivity to context, the steps of data collection and analysis are not as separate and consecutive as they tend to be in quantitative research [ 13 , 14 ]. As Fossey puts it : “sampling, data collection, analysis and interpretation are related to each other in a cyclical (iterative) manner, rather than following one after another in a stepwise approach” [ 15 ]. The researcher can make educated decisions with regard to the choice of method, how they are implemented, and to which and how many units they are applied [ 13 ]. As shown in Fig. 1 , this can involve several back-and-forth steps between data collection and analysis where new insights and experiences can lead to adaption and expansion of the original plan. Some insights may also necessitate a revision of the research question and/or the research design as a whole. The process ends when saturation is achieved, i.e. when no relevant new information can be found (see also below: sampling and saturation). For reasons of transparency, it is essential for all decisions as well as the underlying reasoning to be well-documented.
Iterative research process
While it is not always explicitly addressed, qualitative methods reflect a different underlying research paradigm than quantitative research (e.g. constructivism or interpretivism as opposed to positivism). The choice of methods can be based on the respective underlying substantive theory or theoretical framework used by the researcher [ 2 ].
The methods of qualitative data collection most commonly used in health research are document study, observations, semi-structured interviews and focus groups [ 1 , 14 , 16 , 17 ].
Document study (also called document analysis) refers to the review by the researcher of written materials [ 14 ]. These can include personal and non-personal documents such as archives, annual reports, guidelines, policy documents, diaries or letters.
Observations are particularly useful to gain insights into a certain setting and actual behaviour – as opposed to reported behaviour or opinions [ 13 ]. Qualitative observations can be either participant or non-participant in nature. In participant observations, the observer is part of the observed setting, for example a nurse working in an intensive care unit [ 18 ]. In non-participant observations, the observer is “on the outside looking in”, i.e. present in but not part of the situation, trying not to influence the setting by their presence. Observations can be planned (e.g. for 3 h during the day or night shift) or ad hoc (e.g. as soon as a stroke patient arrives at the emergency room). During the observation, the observer takes notes on everything or certain pre-determined parts of what is happening around them, for example focusing on physician-patient interactions or communication between different professional groups. Written notes can be taken during or after the observations, depending on feasibility (which is usually lower during participant observations) and acceptability (e.g. when the observer is perceived to be judging the observed). Afterwards, these field notes are transcribed into observation protocols. If more than one observer was involved, field notes are taken independently, but notes can be consolidated into one protocol after discussions. Advantages of conducting observations include minimising the distance between the researcher and the researched, the potential discovery of topics that the researcher did not realise were relevant and gaining deeper insights into the real-world dimensions of the research problem at hand [ 18 ].
Hijmans & Kuyper describe qualitative interviews as “an exchange with an informal character, a conversation with a goal” [ 19 ]. Interviews are used to gain insights into a person’s subjective experiences, opinions and motivations – as opposed to facts or behaviours [ 13 ]. Interviews can be distinguished by the degree to which they are structured (i.e. a questionnaire), open (e.g. free conversation or autobiographical interviews) or semi-structured [ 2 , 13 ]. Semi-structured interviews are characterized by open-ended questions and the use of an interview guide (or topic guide/list) in which the broad areas of interest, sometimes including sub-questions, are defined [ 19 ]. The pre-defined topics in the interview guide can be derived from the literature, previous research or a preliminary method of data collection, e.g. document study or observations. The topic list is usually adapted and improved at the start of the data collection process as the interviewer learns more about the field [ 20 ]. Across interviews the focus on the different (blocks of) questions may differ and some questions may be skipped altogether (e.g. if the interviewee is not able or willing to answer the questions or for concerns about the total length of the interview) [ 20 ]. Qualitative interviews are usually not conducted in written format as it impedes on the interactive component of the method [ 20 ]. In comparison to written surveys, qualitative interviews have the advantage of being interactive and allowing for unexpected topics to emerge and to be taken up by the researcher. This can also help overcome a provider or researcher-centred bias often found in written surveys, which by nature, can only measure what is already known or expected to be of relevance to the researcher. Interviews can be audio- or video-taped; but sometimes it is only feasible or acceptable for the interviewer to take written notes [ 14 , 16 , 20 ].
Focus groups are group interviews to explore participants’ expertise and experiences, including explorations of how and why people behave in certain ways [ 1 ]. Focus groups usually consist of 6–8 people and are led by an experienced moderator following a topic guide or “script” [ 21 ]. They can involve an observer who takes note of the non-verbal aspects of the situation, possibly using an observation guide [ 21 ]. Depending on researchers’ and participants’ preferences, the discussions can be audio- or video-taped and transcribed afterwards [ 21 ]. Focus groups are useful for bringing together homogeneous (to a lesser extent heterogeneous) groups of participants with relevant expertise and experience on a given topic on which they can share detailed information [ 21 ]. Focus groups are a relatively easy, fast and inexpensive method to gain access to information on interactions in a given group, i.e. “the sharing and comparing” among participants [ 21 ]. Disadvantages include less control over the process and a lesser extent to which each individual may participate. Moreover, focus group moderators need experience, as do those tasked with the analysis of the resulting data. Focus groups can be less appropriate for discussing sensitive topics that participants might be reluctant to disclose in a group setting [ 13 ]. Moreover, attention must be paid to the emergence of “groupthink” as well as possible power dynamics within the group, e.g. when patients are awed or intimidated by health professionals.
Choosing the “right” method
As explained above, the school of thought underlying qualitative research assumes no objective hierarchy of evidence and methods. This means that each choice of single or combined methods has to be based on the research question that needs to be answered and a critical assessment with regard to whether or to what extent the chosen method can accomplish this – i.e. the “fit” between question and method [ 14 ]. It is necessary for these decisions to be documented when they are being made, and to be critically discussed when reporting methods and results.
Let us assume that our research aim is to examine the (clinical) processes around acute endovascular treatment (EVT), from the patient’s arrival at the emergency room to recanalization, with the aim to identify possible causes for delay and/or other causes for sub-optimal treatment outcome. As a first step, we could conduct a document study of the relevant standard operating procedures (SOPs) for this phase of care – are they up-to-date and in line with current guidelines? Do they contain any mistakes, irregularities or uncertainties that could cause delays or other problems? Regardless of the answers to these questions, the results have to be interpreted based on what they are: a written outline of what care processes in this hospital should look like. If we want to know what they actually look like in practice, we can conduct observations of the processes described in the SOPs. These results can (and should) be analysed in themselves, but also in comparison to the results of the document analysis, especially as regards relevant discrepancies. Do the SOPs outline specific tests for which no equipment can be observed or tasks to be performed by specialized nurses who are not present during the observation? It might also be possible that the written SOP is outdated, but the actual care provided is in line with current best practice. In order to find out why these discrepancies exist, it can be useful to conduct interviews. Are the physicians simply not aware of the SOPs (because their existence is limited to the hospital’s intranet) or do they actively disagree with them or does the infrastructure make it impossible to provide the care as described? Another rationale for adding interviews is that some situations (or all of their possible variations for different patient groups or the day, night or weekend shift) cannot practically or ethically be observed. 
In this case, it is possible to ask those involved to report on their actions – bearing in mind that this is not the same as actual observation. A senior physician's or hospital manager's description of certain situations might differ from that of a nurse or junior physician, perhaps because they intentionally misrepresent facts, or perhaps because different aspects of the process are visible or important to them. In some cases, it can also be relevant to consider to whom the interviewee is disclosing this information – someone they trust, someone they are otherwise not connected to, or someone they suspect, or know, to be in a potentially "dangerous" power relationship to them. Lastly, a focus group could be conducted with representatives of the relevant professional groups to explore how and why exactly they provide care around EVT. The discussion might reveal discrepancies (between SOPs and actual care, or between different physicians) and motivations – to the researchers as well as to the focus group members, who might not have been aware of them themselves. For the focus group to deliver relevant information, attention has to be paid to its composition and conduct, for example to make sure that all participants feel safe to disclose sensitive or potentially problematic information, and that the discussion is not dominated by (senior) physicians. The resulting combination of data collection methods is shown in Fig. 2.
Possible combination of data collection methods
Attributions for icons: “Book” by Serhii Smirnov, “Interview” by Adrien Coquet, FR, “Magnifying Glass” by anggun, ID, “Business communication” by Vectors Market; all from the Noun Project
The combination of multiple data sources as described in this example can be referred to as "triangulation", in which multiple measurements are carried out from different angles to achieve a more comprehensive understanding of the phenomenon under study [22, 23].
To analyse the data collected through observations, interviews and focus groups, these first need to be transcribed into protocols and transcripts (see Fig. 3). Interviews and focus groups can be transcribed verbatim, with or without annotations for behaviour (e.g. laughing, crying, pausing) and with or without phonetic transcription of dialects and filler words, depending on what is expected or known to be relevant for the analysis. In the next step, the protocols and transcripts are coded, that is, marked (or tagged, labelled) with one or more short descriptors of the content of a sentence or paragraph [2, 15, 23]. Jansen describes coding as "connecting the raw data with 'theoretical' terms" [20]. In a more practical sense, coding makes raw data sortable. This makes it possible to extract and examine all segments describing, say, a tele-neurology consultation from multiple data sources (e.g. SOPs, emergency room observations, staff and patient interviews). In a process of synthesis and abstraction, the codes are then grouped, summarised and/or categorised [15, 20]. The end product of the coding or analysis process is a descriptive theory of the behavioural pattern under investigation [20]. The coding process is usually performed using qualitative data management software, the most common being NVivo, MAXQDA and ATLAS.ti. It should be noted that these are data management tools which support the analysis performed by the researcher(s) [14].
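The idea that "coding makes raw data sortable" can be pictured with a few lines of code. The sketch below is purely illustrative – it is not how NVivo, MAXQDA or ATLAS.ti work internally, and all segments, sources and code names are invented for the EVT example: each coded segment carries one or more codes, and sorting by code pulls together every segment on a topic across data sources.

```python
from collections import defaultdict

# Hypothetical coded segments from the EVT example: each pairs a data
# source, a text excerpt, and the codes applied to it (all invented).
segments = [
    {"source": "SOP", "text": "Tele-neurology consult within 10 minutes of arrival.",
     "codes": ["tele-neurology", "time-target"]},
    {"source": "ER observation", "text": "Consult started 25 minutes after arrival.",
     "codes": ["tele-neurology", "delay"]},
    {"source": "staff interview", "text": "The video cart is often in use elsewhere.",
     "codes": ["tele-neurology", "infrastructure", "delay"]},
]

# Sorting by code: collect every segment tagged with a given code,
# across all data sources, so they can be examined together.
by_code = defaultdict(list)
for seg in segments:
    for code in seg["codes"]:
        by_code[code].append((seg["source"], seg["text"]))

# All material on the tele-neurology consultation, from SOPs,
# observations and interviews alike:
for source, text in by_code["tele-neurology"]:
    print(f"{source}: {text}")
```

The grouping and categorising step described above would then operate on these per-code collections rather than on the raw transcripts.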
From data collection to data analysis
Attributions for icons: see Fig. 2; also "Speech to text" by Trevor Dsouza, "Field Notes" by Mike O'Brien, US, "Voice Record" by ProSymbols, US, "Inspection" by Made, AU, and "Cloud" by Graphic Tigers; all from the Noun Project
How to report qualitative research?
Protocols of qualitative research can be published separately and in advance of the study results. However, the aim is not the same as for RCT protocols, i.e. to pre-define and set in stone the research questions and primary or secondary endpoints. Rather, it is a way to describe the research methods in detail, which might not be possible in the results paper given journals' word limits. Qualitative research papers are usually longer than their quantitative counterparts to allow for deep understanding and so-called "thick description". In the methods section, the focus is on transparency of the methods used, including why, how and by whom they were implemented in the specific study setting, so as to enable a discussion of whether and how this may have influenced data collection, analysis and interpretation. The results section usually starts with a paragraph outlining the main findings, followed by more detailed descriptions of, for example, the commonalities, discrepancies or exceptions per category [20]. Here it is important to support the main findings with relevant quotations, which may add information, context, emphasis or real-life examples [20, 23]. It is subject to debate in the field whether it is relevant to state the exact number or percentage of respondents supporting a certain statement (e.g. "Five interviewees expressed negative feelings towards XYZ") [21].
How to combine qualitative with quantitative research?
Qualitative methods can be combined with other methods in multi- or mixed methods designs, which “[employ] two or more different methods [ …] within the same study or research program rather than confining the research to one single method” [ 24 ]. Reasons for combining methods can be diverse, including triangulation for corroboration of findings, complementarity for illustration and clarification of results, expansion to extend the breadth and range of the study, explanation of (unexpected) results generated with one method with the help of another, or offsetting the weakness of one method with the strength of another [ 1 , 17 , 24 – 26 ]. The resulting designs can be classified according to when, why and how the different quantitative and/or qualitative data strands are combined. The three most common types of mixed method designs are the convergent parallel design , the explanatory sequential design and the exploratory sequential design. The designs with examples are shown in Fig. 4 .
Three common mixed methods designs
In the convergent parallel design, a qualitative study is conducted in parallel to, and independently of, a quantitative study, and the results of both studies are compared and combined at the stage of interpretation. Using the above example of EVT provision, this could entail setting up a quantitative EVT registry to measure process times and patient outcomes in parallel to conducting the qualitative research outlined above, and then comparing results. Amongst other things, this would make it possible to assess whether interview respondents' subjective impressions of patients receiving good care match modified Rankin Scores at follow-up, or whether observed delays in care provision are exceptions or the rule when compared to door-to-needle times as documented in the registry. In the explanatory sequential design, a quantitative study is carried out first, followed by a qualitative study to help explain the quantitative results. This would be an appropriate design if the registry alone had revealed relevant delays in door-to-needle times and the qualitative study were used to understand where and why these occurred, and how they could be reduced. In the exploratory sequential design, the qualitative study is carried out first and its results help to inform and build the quantitative study in the next step [26]. If the qualitative study around EVT provision had shown a high level of dissatisfaction among the staff members involved, a quantitative questionnaire investigating staff satisfaction could be set up in the next step, informed by the topics on which the qualitative study had revealed dissatisfaction. Amongst other things, the questionnaire design would make it possible to widen the reach of the research to more respondents from different (types of) hospitals, regions, countries or settings, and to conduct sub-group analyses for different professional groups.
How to assess qualitative research?
A variety of assessment criteria and checklists have been developed for qualitative research, varying in focus and comprehensiveness [14, 17, 27]. However, none of these has been elevated to the "gold standard" in the field. In the following, we therefore focus on a set of commonly used assessment criteria that, from a practical standpoint, a researcher can look for when assessing a qualitative research report or paper.
Assessors should check the authors’ use of and adherence to the relevant reporting checklists (e.g. Standards for Reporting Qualitative Research (SRQR)) to make sure all items that are relevant for this type of research are addressed [ 23 , 28 ]. Discussions of quantitative measures in addition to or instead of these qualitative measures can be a sign of lower quality of the research (paper). Providing and adhering to a checklist for qualitative research contributes to an important quality criterion for qualitative research, namely transparency [ 15 , 17 , 23 ].
While methodological transparency and complete reporting are relevant for all types of research, some additional criteria must be taken into account for qualitative research. This includes what is called reflexivity, i.e. sensitivity to the relationship between the researcher and the researched, including how contact was established and maintained, and the background and experience of the researcher(s) involved in data collection and analysis. Depending on the research question and the population to be researched, this can be limited to professional experience, but it may also include gender, age or ethnicity [17, 27]. These details are relevant because in qualitative research, as opposed to quantitative research, the researcher as a person cannot be isolated from the research process [23]. It may influence the conversation when an interviewed patient speaks to an interviewer who is a physician, or when an interviewee is asked to discuss a gynaecological procedure with a male interviewer, and the reader must therefore be made aware of these details [19].
Sampling and saturation
The aim of qualitative sampling is for all variants of the objects of observation that are deemed relevant for the study to be present in the sample, "to see the issue and its meanings from as many angles as possible" [1, 16, 19, 20, 27], and to ensure "information-richness" [15]. An iterative sampling approach is advised, in which data collection (e.g. five interviews) is followed by data analysis, followed by more data collection to find variants that are lacking in the current sample. This process continues until no new (relevant) information can be found and further sampling becomes redundant – which is called saturation [1, 15]. In other words: qualitative data collection finds its end point not a priori, but when the research team determines that saturation has been reached [29, 30].
This is also the reason why most qualitative studies use deliberate instead of random sampling strategies. This is generally referred to as "purposive sampling", in which researchers pre-define which types of participants or cases they need to include so as to cover all variations that are expected to be of relevance, based on the literature, previous experience or theory (i.e. theoretical sampling) [14, 20]. Other types of purposive sampling include (but are not limited to) maximum variation sampling, critical case sampling, and extreme or deviant case sampling [2]. In the above EVT example, a purposive sample could include all relevant professional groups and/or all relevant stakeholders (patients, relatives) and/or all relevant times of observation (day, night and weekend shifts).
Assessors of qualitative research should check whether the considerations underlying the sampling strategy were sound and whether or how researchers tried to adapt and improve their strategies in stepwise or cyclical approaches between data collection and analysis to achieve saturation [ 14 ].
Good qualitative research is iterative in nature, i.e. it goes back and forth between data collection and analysis, revising and improving the approach where necessary. One example of this is pilot interviews, in which different aspects of the interview (especially the interview guide, but also, for example, the site of the interview or whether the interview can be audio-recorded) are tested with a small number of respondents, evaluated and revised [19]. In doing so, the interviewer learns which wording or types of questions work best, or what the best length of an interview is for patients who have trouble concentrating for an extended time. Of course, the same reasoning applies to observations or focus groups, which can also be piloted.
Ideally, coding should be performed by at least two researchers, especially at the beginning of the coding process, when a common approach must be defined, including the establishment of a useful coding list (or tree) and a shared understanding of what the individual codes mean [23]. An initial sub-set of transcripts, or all of them, can be coded independently by the coders and then compared and consolidated in regular discussions within the research team. This is to make sure that codes are applied consistently to the research data.
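The comparison step can be pictured as follows. This is a minimal, purely illustrative sketch – the segment ids, code names and assignments are all invented, and in practice the comparison happens inside the qualitative data management software or in team discussion – but it shows the mechanics: flag every segment on which the two coders' code sets differ, so those segments can be discussed and the coding list consolidated.

```python
# Hypothetical independent coding of the same transcript segments by two
# researchers; segment ids and codes are invented for illustration.
coder_1 = {
    "seg-01": {"delay", "tele-neurology"},
    "seg-02": {"infrastructure"},
    "seg-03": {"delay"},
}
coder_2 = {
    "seg-01": {"delay", "tele-neurology"},
    "seg-02": {"infrastructure", "staffing"},
    "seg-03": {"communication"},
}

# Flag every segment whose applied code sets differ between the coders,
# as input for the consolidation discussion in the research team.
disagreements = {
    seg: (coder_1[seg], coder_2[seg])
    for seg in coder_1
    if coder_1[seg] != coder_2[seg]
}
for seg, (c1, c2) in sorted(disagreements.items()):
    print(f"{seg}: coder 1 {sorted(c1)} vs coder 2 {sorted(c2)}")
```

Here seg-02 and seg-03 would go on the discussion agenda, while seg-01 needs no further attention.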
Member checking, also called respondent validation , refers to the practice of checking back with study respondents to see if the research is in line with their views [ 14 , 27 ]. This can happen after data collection or analysis or when first results are available [ 23 ]. For example, interviewees can be provided with (summaries of) their transcripts and asked whether they believe this to be a complete representation of their views or whether they would like to clarify or elaborate on their responses [ 17 ]. Respondents’ feedback on these issues then becomes part of the data collection and analysis [ 27 ].
In those niches where qualitative approaches have been able to evolve and grow, a new trend has seen the inclusion of patients and their representatives not only as study participants (i.e. “members”, see above) but as consultants to and active participants in the broader research process [ 31 – 33 ]. The underlying assumption is that patients and other stakeholders hold unique perspectives and experiences that add value beyond their own single story, making the research more relevant and beneficial to researchers, study participants and (future) patients alike [ 34 , 35 ]. Using the example of patients on or nearing dialysis, a recent scoping review found that 80% of clinical research did not address the top 10 research priorities identified by patients and caregivers [ 32 , 36 ]. In this sense, the involvement of the relevant stakeholders, especially patients and relatives, is increasingly being seen as a quality indicator in and of itself.
How not to assess qualitative research
The above overview does not include certain items that are routine in assessments of quantitative research. What follows is a non-exhaustive, non-representative, experience-based list of the quantitative criteria often applied to the assessment of qualitative research, as well as an explanation of the limited usefulness of these endeavours.
Given the openness and flexibility of qualitative research, it should not be assessed by how well it adheres to pre-determined and fixed strategies – in other words: its rigidity. Instead, the assessor should look for signs of adaptation and refinement based on lessons learned from earlier steps in the research process.
For the reasons explained above, qualitative research does not require specific sample sizes, nor does it require that the sample size be determined a priori [ 1 , 14 , 27 , 37 – 39 ]. Sample size can only be a useful quality indicator when related to the research purpose, the chosen methodology and the composition of the sample, i.e. who was included and why.
While some authors argue that randomisation can be used in qualitative research, this is not commonly the case, as neither its feasibility nor its necessity or usefulness has been convincingly established for qualitative research [13, 27]. Relevant disadvantages include the negative impact of an overly large sample size as well as the possibility (or probability) of selecting "quiet, uncooperative or inarticulate individuals" [17]. Qualitative studies do not use control groups, either.
Interrater reliability, variability and other “objectivity checks”
The concept of "interrater reliability" is sometimes used in qualitative research to assess the extent to which the coding of two co-coders overlaps. However, it is not clear what this measure tells us about the quality of the analysis [23]. This means that such scores can be included in qualitative research reports, preferably with some additional information on what the score means for the analysis, but they are not a requirement. Relatedly, separating the roles of recruiting study participants, collecting the data and analysing them is not relevant for the quality or "objectivity" of qualitative research. Experience even shows that it might be better to have the same person or team perform all of these tasks [20]. First, when researchers introduce themselves during recruitment, this can enhance trust when the interview takes place days or weeks later with the same researcher. Second, when the audio-recording is transcribed for analysis, the researcher conducting the interviews will usually remember the interviewee and the specific interview situation during data analysis. This can provide additional context for the interpretation of data, e.g. on whether something might have been meant as a joke [18].
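For readers who do encounter such a score: a commonly reported measure is Cohen's kappa, which corrects raw agreement between two coders for the agreement expected by chance. The sketch below is a minimal illustration with invented codes and segments (one code per segment, a simplification), not a recommendation to report kappa.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning one code per segment:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e the agreement expected by chance, computed from each rater's
    marginal code frequencies."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes applied by two co-coders to ten transcript segments.
a = ["delay", "delay", "infra", "delay", "infra", "infra", "delay", "infra", "delay", "delay"]
b = ["delay", "infra", "infra", "delay", "infra", "delay", "delay", "infra", "delay", "delay"]
print(round(cohens_kappa(a, b), 2))  # prints 0.58
```

Eight of ten segments agree (p_o = 0.8), but with both coders using "delay" six times and "infra" four times, chance agreement is already p_e = 0.52, so kappa ≈ 0.58 – which illustrates the point above: the number alone says little about whether the coding, or the coding list, was any good.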
Not being quantitative research
Being qualitative research instead of quantitative research should not be used as an assessment criterion if it is used irrespectively of the research problem at hand. Similarly, qualitative research should not be required to be combined with quantitative research per se – unless mixed methods research is judged as inherently better than single-method research. In this case, the same criterion should be applied for quantitative studies without a qualitative component.
The main take-away points of this paper are summarised in Table 1. We aimed to show that, if conducted well, qualitative research can answer specific research questions that cannot be adequately answered using (only) quantitative designs. Seeing qualitative and quantitative methods as equal will help us become more aware and critical of the "fit" between the research problem and our chosen methods: I can conduct an RCT to determine the reasons for transportation delays of acute stroke patients – but should I? It also provides us with a greater range of tools to tackle a greater range of research problems more appropriately and successfully, filling in the blind spots on one half of the methodological spectrum to better address the whole complexity of neurological research and practice.
Authors' contributions
LB drafted the manuscript; WW and CG revised the manuscript; all authors approved the final versions.
Funding
No external funding.
Competing interests
The authors declare no competing interests.
Qualitative Research Resources: Assessing Qualitative Research
Created by health science librarians.
About this Page
- LEGEND (Let Evidence Guide Every New Decision) Assessment Tools: Cincinnati Children's Hospital
- EQUATOR Network: Enhancing the Quality and Transparency of Health Research
- Other Tools for Assessing Qualitative Research
Why is this information important?
- Qualitative research typically focuses on collecting very detailed information on a few cases and often addresses meaning, rather than objectively identifiable factors.
- This means that typical markers of research quality for quantitative studies, such as validity and reliability, cannot be used to assess qualitative research.
On this page you'll find:
The resources on this page will guide you to some of the alternative measures/tools you can use to assess qualitative research.
Evidence Evaluation Tools and Resources
This website has a number of resources for evaluating health sciences research across a variety of designs/study types, including an Evidence Appraisal form for qualitative research (see the Domain of Clinical Questions table), as well as forms for mixed methods studies from a variety of clinical question domains. The site includes information on the following:
- Evaluating the Evidence Algorithm (pdf download)
- Evidence Appraisal Forms ( see Domain of Clinical Questions Table )
- Table of Evidence Levels (pdf download)
- Grading a Body of Evidence (pdf download)
- Judging the Strength of a Recommendation (pdf download)
- LEGEND Glossary (pdf download)
- EQUATOR: Qualitative Research Reporting Guidelines
- EQUATOR Network Home
The EQUATOR Network is an ‘umbrella’ organisation that brings together researchers, medical journal editors, peer reviewers, developers of reporting guidelines, research funding bodies and other collaborators with mutual interest in improving the quality of research publications and of research itself.
The EQUATOR Library contains a comprehensive searchable database of reporting guidelines for many study types--including qualitative--and also links to other resources relevant to research reporting:
- Library for health research reporting: provides an up-to-date collection of guidelines and policy documents related to health research reporting. These are aimed mainly at authors of research articles, journal editors, peer reviewers and reporting guideline developers.
- Toolkits to support writing research, using guidelines, teaching research skills, selecting the appropriate reporting guideline
- Courses and events
- Librarian Network
Also see the Articles box below; some of the articles contain checklists or tools.
Most checklists and tools are meant to help you think critically and systematically when appraising research. To use them appropriately, users should generally consult accompanying materials such as manuals, handbooks and cited literature. A broad understanding of the variety and complexity of qualitative research is generally necessary, along with an understanding of the relevant philosophical perspectives and knowledge of specific qualitative research methods and their implementation.
- CASP/Critical Assessment Skills Programme Tool for Evaluating Qualitative Research 2018
- CASP Knowledge Hub Includes critical appraisal checklists for key study designs; glossary of key research terms; key links related to evidence based healthcare, statistics, and research; a bibliography of articles and research papers about CASP and other critical appraisal tools and approaches 1993-2012.
- JBI (Joanna Briggs Institute) Manual for Evidence Synthesis (2020). See Chapter 2: Systematic reviews of qualitative evidence, including Appendix 2.1: JBI Critical Appraisal Checklist for Qualitative Research, Appendix 2.2: Discussion of JBI Qualitative critical appraisal criteria, and Appendix 2.3: JBI Qualitative data extraction tool; and Chapter 8: Mixed methods systematic reviews. Aromataris E, Munn Z (Editors). JBI Manual for Evidence Synthesis. JBI, 2020. Available from https://synthesismanual.jbi.global. https://doi.org/10.46658/JBIMES-20-01
- McGill Mixed Methods Appraisal Tool (MMAT) Front Page. Public wiki site for the MMAT. The MMAT is intended to be used as a checklist for concomitantly appraising and/or describing studies included in systematic mixed studies reviews (reviews including original qualitative, quantitative and mixed methods studies). The MMAT was first published in 2009. Since then, it has been validated in several studies testing its interrater reliability, usability and content validity. The latest version of the MMAT was updated in 2018.
- McGill Mixed Methods Appraisal Tool (MMAT) 2018 User Guide See full site (public wiki link above) for additional information, including FAQ's, references and resources, earlier versions, and more.
- McMaster University Critical Review Form & Guidelines for Qualitative Studies v2.0 Includes links to Qualitative Review Form (v2.0) and accompanying Guidelines from the Evidence Based Practice Research Group of McMaster University's School of Rehabilitation Science). Links are also provided for Spanish, German, and French versions.
- NICE Quality Appraisal Checklist-Qualitative Studies, 3rd ed, 2012, from the UK National Institute for Health and Care Excellence. Includes the checklist and notes on its use. From Methods for the Development of NICE Public Health Guidance, 3rd edition. © Copyright National Institute for Health and Clinical Excellence, 2006 (updated 2012). All rights reserved. This material may be freely reproduced for educational and not-for-profit purposes. No reproduction by or for commercial organisations, or for commercial purposes, is allowed without the express written permission of the Institute.
- NICE Quality Appraisal Checklist-Qualitative Studies, 3rd ed. (.pdf download) Appendix H Checklist and Notes download.
- Qualitative Research Review Guidelines, RATS
- SBU (Swedish Agency for Health Technology Assessment and Assessment of Social Services). Evaluation and synthesis of studies using qualitative methods of analysis. Stockholm: SBU; 2016. Appendix 2 of this document (at the end) contains a checklist for evaluating qualitative research.
- Users' Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice, 3rd ed (JAMA Evidence) Chapter 13.5 Qualitative Research
- Slides: Appraising Qualitative Research from Users' Guide to the Medical Literature, 3rd edition Click on the 'Related Content' tab to find the link to download the Appraising Qualitative Research slides.
These articles address a range of issues related to understanding and evaluating qualitative research; some include checklists or tools.
Clissett, P. (2008) "Evaluating Qualitative Research." Journal of Orthopaedic Nursing 12: 99-105.
Cohen, Deborah J. and Benjamin F. Crabtree. (2008) "Evidence for Qualitative Research in Health Care: Controversies and Recommendations." Annals of Family Medicine 6(4): 331-339.
- Supplemental Appendix 1. Search Strategy for Criteria for Qualitative Research in Health Care
- Supplemental Appendix 2. Publications Analyzed: Health Care Journals and Frequently Referenced Books and Book Chapters (1980-2005) That Posited Criteria for "Good" Qualitative Research.
Dixon-Woods, M., R.L. Shaw, S. Agarwal, and J.A. Smith. (2004) "The Problem of Appraising Qualitative Research." Qual Safe Health Care 13: 223-225.
Fossey, E., C. Harvey, F. McDermott, and L. Davidson. (2002) "Understanding and Evaluating Qualitative Research." Australian and New Zealand Journal of Psychiatry 36(6): 717-732.
Hammarberg, K., M. Kirkman, S. de Lacey. (2016) "Qualitative Research Methods: When to Use and How to Judge them." Human Reproduction 31 (3): 498-501.
Lee, J. (2014) "Genre-Appropriate Judgments of Qualitative Research." Philosophy of the Social Sciences 44(3): 316-348. (This provides 3 strategies for evaluating qualitative research, 2 that the author is not crazy about and one that he considers more appropriate/accurate).
Majid, Umair and Vanstone, Meredith. (2018) "Appraising Qualitative Research for Evidence Syntheses: A Compendium of Quality Appraisal Tools." Qualitative Health Research 28(13): 2115-2131. PMID: 30047306 DOI: 10.1177/1049732318785358
Meyrick, Jane. (2006) "What is Good Qualitative Research? A First Step towards a Comprehensive Approach to Judging Rigour/Quality." Journal of Health Psychology 11(5): 799-808.
Miles, MB, AM Huberman, and J Saldana. (2014) Qualitative Data Analysis. Thousand Oaks, California: SAGE Publications, Inc. Chapter 11: Drawing and Verifying Conclusions. Check availability of the print book.
Morse, JM. (1997) "'Perfectly Healthy but Dead': The Myth of Inter-Rater Reliability." Qualitative Health Research 7(4): 445-447.
O’Brien BC, Harris IB, Beckman TJ, et al. (2014) Standards for reporting qualitative research: a synthesis of recommendations . Acad Med 89(9):1245–1251. DOI: 10.1097/ACM.0000000000000388 PMID: 24979285
The Standards for Reporting Qualitative Research (SRQR) consists of 21 items. The authors define and explain key elements of each item and provide examples from recently published articles to illustrate ways in which the standards can be met. The SRQR aims to improve the transparency of all aspects of qualitative research by providing clear standards for reporting qualitative research. These standards will assist authors during manuscript preparation, editors and reviewers in evaluating a manuscript for potential publication, and readers when critically appraising, applying, and synthesizing study findings.
Ryan, Frances, Michael Coughlin, and Patricia Cronin. (2007) "Step by Step Guide to Critiquing Research: Part 2, Qualitative Research." British Journal of Nursing 16(12): 738-744.
Stige, B, K. Malterud, and T. Midtgarden. (2009) "Toward an Agenda for Evaluation of Qualitative Research." Qualitative Health Research 19(10): 1504-1516.
Tong, Allison and Mary Amanda Dew. (2016, Epub ahead of print) "Qualitative Research in Transplantation: Ensuring Relevance and Rigor." Transplantation.
Allison Tong, Peter Sainsbury, Jonathan Craig; Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups , International Journal for Quality in Health Care , Volume 19, Issue 6, 1 December 2007, Pages 349–357, https://doi.org/10.1093/intqhc/mzm042
The criteria included in COREQ, a 32-item checklist, can help researchers to report important aspects of the research team, study methods, context of the study, findings, analysis and interpretations. Items most frequently included in the checklists related to sampling method, setting for data collection, method of data collection, respondent validation of findings, method of recording data, description of the derivation of themes and inclusion of supporting quotations. We grouped all items into three domains: (i) research team and reflexivity, (ii) study design and (iii) data analysis and reporting.
Tracy, Sarah. (2010) "Qualitative Quality: Eight 'Big-Tent' Criteria for Excellent Qualitative Research." Qualitative Inquiry 16(10): 837-851.
- Critical Appraisal Skills Programme
- IMPSCI (Implementation Science) Tutorials
- Johns Hopkins: Why Mixed Methods?
- Measuring, Learning, and Evaluation Project for the Urban Reproductive Health Initiative (2010-2015). Some project resources are still available.
- NIH OBSSR (Office of Behavioral & Social Sciences Research) Best Practices for Mixed Methods Research in Health Sciences, 2011. The OBSSR commissioned a team in 2010 to develop a resource that would provide guidance to NIH investigators on how to rigorously develop and evaluate mixed methods research applications. Authors: John W. Creswell, Ph.D. (University of Nebraska-Lincoln); Ann Carroll Klassen, Ph.D. (Drexel University); Vicki L. Plano Clark, Ph.D. (University of Nebraska-Lincoln); and Katherine Clegg Smith, Ph.D. (Johns Hopkins University), with the assistance of a specially appointed working group.
- NIH OBSSR Qualitative Methods in Health Research. Legacy resource: the Office of Behavioral and Social Sciences Research (OBSSR) sponsored a workshop in 1999 entitled Qualitative Methods in Health Research: Opportunities and Considerations in Application and Review. The workshop brought together 12 researchers who served on NIH review committees or had been successful in obtaining funding from NIH. (The original link, https://obssr-archive.od.nih.gov/pdf/Qualitative.PDF, is no longer active; see https://obssr.od.nih.gov/about-us/publications/.)
- NSF Workshop on Interdisciplinary Standards for Systematic Qualitative Research. On May 19-20, 2005, a workshop on Interdisciplinary Standards for Systematic Qualitative Research was held at the National Science Foundation (NSF) in Arlington, Virginia. The workshop was cofunded by a grant from four NSF programs: Cultural Anthropology, Law and Social Science, Political Science, and Sociology. It is well recognized that each of the four disciplines has its own research design and evaluation culture, as well as considerable variability in the emphasis on interpretation and explanation, commitment to constructivist and positivist epistemologies, and the degree of perceived consensus about the value and prominence of qualitative research methods. Within this multidisciplinary and multimethods context, twenty-four scholars from the four disciplines were charged to (1) articulate the standards used in their particular field to ensure rigor across the range of qualitative methodological approaches; (2) identify common criteria shared across the four disciplines for designing and evaluating research proposals and fostering multidisciplinary collaborations; and (3) develop an agenda for strengthening the tools, training, data, research design, and infrastructure for research using qualitative approaches.
- Qualitative Research for Improved Programs
- Qualitative Research Methods: A Data Collector's Field Guide (2005). From FHI 360/Family Health International with support from USAID. Natasha Mack, Cynthia Woodsong, Kathleen M. MacQueen, Greg Guest, and Emily Namey. The guide is divided into five modules covering the following topics:
  - Module 1 – Qualitative Research Methods Overview
  - Module 2 – Participant Observation
  - Module 3 – In-Depth Interviews
  - Module 4 – Focus Groups
  - Module 5 – Data Documentation and Management
- Robert Wood Johnson Foundation Guidelines for Designing, Analyzing, and Reporting Qualitative Research
- Robert Wood Johnson Foundation: Qualitative Research Guidelines Project
- Last Updated: Dec 14, 2023 5:11 PM
- URL: https://guides.lib.unc.edu/qual