Survey Research – Types, Methods, Examples

Survey Research

Definition:

Survey Research is a quantitative research method that involves collecting standardized data from a sample of individuals or groups through the use of structured questionnaires or interviews. The data collected is then analyzed statistically to identify patterns and relationships between variables, and to draw conclusions about the population being studied.

Survey research can be used to answer a variety of questions, including:

  • What are people’s opinions about a certain topic?
  • What are people’s experiences with a certain product or service?
  • What are people’s beliefs about a certain issue?

Survey Research Methods

Survey Research Methods are as follows:

  • Telephone surveys: A survey research method where questions are administered to respondents over the phone, often used in market research or political polling.
  • Face-to-face surveys: A survey research method where questions are administered to respondents in person, often used in social or health research.
  • Mail surveys: A survey research method where questionnaires are sent to respondents through mail, often used in customer satisfaction or opinion surveys.
  • Online surveys: A survey research method where questions are administered to respondents through online platforms, often used in market research or customer feedback.
  • Email surveys: A survey research method where questionnaires are sent to respondents through email, often used in customer satisfaction or opinion surveys.
  • Mixed-mode surveys: A survey research method that combines two or more survey modes, often used to increase response rates or reach diverse populations.
  • Computer-assisted surveys: A survey research method that uses computer technology to administer or collect survey data, often used in large-scale surveys or data collection.
  • Interactive voice response surveys: A survey research method where respondents answer questions through a touch-tone telephone system, often used in automated customer satisfaction or opinion surveys.
  • Mobile surveys: A survey research method where questions are administered to respondents through mobile devices, often used in market research or customer feedback.
  • Group-administered surveys: A survey research method where questions are administered to a group of respondents simultaneously, often used in education or training evaluation.
  • Web-intercept surveys: A survey research method where questions are administered to website visitors, often used in website or user experience research.
  • In-app surveys: A survey research method where questions are administered to users of a mobile application, often used in mobile app or user experience research.
  • Social media surveys: A survey research method where questions are administered to respondents through social media platforms, often used in social media or brand awareness research.
  • SMS surveys: A survey research method where questions are administered to respondents through text messaging, often used in customer feedback or opinion surveys.
  • IVR surveys: A survey research method where questions are administered to respondents through an interactive voice response system, often used in automated customer feedback or opinion surveys.
  • Mixed-method surveys: A survey research method that combines both qualitative and quantitative data collection methods, often used in exploratory or mixed-method research.
  • Drop-off surveys: A survey research method where respondents are provided with a survey questionnaire and asked to return it at a later time or through a designated drop-off location.
  • Intercept surveys: A survey research method where respondents are approached in public places and asked to participate in a survey, often used in market research or customer feedback.
  • Hybrid surveys: A survey research method that combines two or more survey modes, data sources, or research methods, often used in complex or multi-dimensional research questions.

Types of Survey Research

There are several types of survey research that can be used to collect data from a sample of individuals or groups. The following are common types of survey research:

  • Cross-sectional survey: A type of survey research that gathers data from a sample of individuals at a specific point in time, providing a snapshot of the population being studied.
  • Longitudinal survey: A type of survey research that gathers data from the same sample of individuals over an extended period of time, allowing researchers to track changes or trends in the population being studied.
  • Panel survey: A type of longitudinal survey research that tracks the same sample of individuals over time, typically collecting data at multiple points in time.
  • Epidemiological survey: A type of survey research that studies the distribution and determinants of health and disease in a population, often used to identify risk factors and inform public health interventions.
  • Observational survey: A type of survey research that collects data through direct observation of individuals or groups, often used in behavioral or social research.
  • Correlational survey: A type of survey research that measures the degree of association or relationship between two or more variables, often used to identify patterns or trends in data.
  • Experimental survey: A type of survey research that involves manipulating one or more variables to observe the effect on an outcome, often used to test causal hypotheses.
  • Descriptive survey: A type of survey research that describes the characteristics or attributes of a population or phenomenon, often used in exploratory research or to summarize existing data.
  • Diagnostic survey: A type of survey research that assesses the current state or condition of an individual or system, often used in health or organizational research.
  • Explanatory survey: A type of survey research that seeks to explain or understand the causes or mechanisms behind a phenomenon, often used in social or psychological research.
  • Process evaluation survey: A type of survey research that measures the implementation and outcomes of a program or intervention, often used in program evaluation or quality improvement.
  • Impact evaluation survey: A type of survey research that assesses the effectiveness or impact of a program or intervention, often used to inform policy or decision-making.
  • Customer satisfaction survey: A type of survey research that measures the satisfaction or dissatisfaction of customers with a product, service, or experience, often used in marketing or customer service research.
  • Market research survey: A type of survey research that collects data on consumer preferences, behaviors, or attitudes, often used in market research or product development.
  • Public opinion survey: A type of survey research that measures the attitudes, beliefs, or opinions of a population on a specific issue or topic, often used in political or social research.
  • Behavioral survey: A type of survey research that measures actual behavior or actions of individuals, often used in health or social research.
  • Attitude survey: A type of survey research that measures the attitudes, beliefs, or opinions of individuals, often used in social or psychological research.
  • Opinion poll: A type of survey research that measures the opinions or preferences of a population on a specific issue or topic, often used in political or media research.
  • Ad hoc survey: A type of survey research that is conducted for a specific purpose or research question, often used in exploratory research or to answer a specific research question.

Types Based on Methodology

Based on methodology, survey research is divided into two types:

Quantitative Survey Research

Qualitative Survey Research

Quantitative survey research is a method of collecting numerical data from a sample of participants through the use of standardized surveys or questionnaires. The purpose of quantitative survey research is to gather empirical evidence that can be analyzed statistically to draw conclusions about a particular population or phenomenon.

In quantitative survey research, the questions are structured and pre-determined, often utilizing closed-ended questions, where participants are given a limited set of response options to choose from. This approach allows for efficient data collection and analysis, as well as the ability to generalize the findings to a larger population.
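
The standardization described above can be sketched as a small data structure: every respondent sees the same questions with the same fixed response options. The question wording and option lists here are purely illustrative:

```python
# A minimal sketch of a standardized closed-ended questionnaire:
# every respondent sees the same questions and the same options.
questionnaire = [
    {"question": "How satisfied are you with the service?",
     "options": ["Very satisfied", "Satisfied", "Neutral", "Dissatisfied"]},
    {"question": "Would you recommend us to a friend?",
     "options": ["Yes", "No"]},
]

def validate(answers):
    """A response is valid only if every answer is one of the
    pre-determined options for its question."""
    return all(ans in q["options"] for q, ans in zip(questionnaire, answers))

print(validate(["Neutral", "Yes"]))      # True: both are listed options
print(validate(["Quite happy", "Yes"]))  # False: free text is rejected
```

Restricting answers to a fixed option set is what makes the resulting data directly comparable across respondents.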

Quantitative survey research is often used in market research, social sciences, public health, and other fields where numerical data is needed to make informed decisions and recommendations.

Qualitative survey research is a method of collecting non-numerical data from a sample of participants through the use of open-ended questions or semi-structured interviews. The purpose of qualitative survey research is to gain a deeper understanding of the experiences, perceptions, and attitudes of participants towards a particular phenomenon or topic.

In qualitative survey research, the questions are open-ended, allowing participants to share their thoughts and experiences in their own words. This approach allows for a rich and nuanced understanding of the topic being studied, and can provide insights that are difficult to capture through quantitative methods alone.

Qualitative survey research is often used in social sciences, education, psychology, and other fields where a deeper understanding of human experiences and perceptions is needed to inform policy, practice, or theory.

Data Analysis Methods

There are several Survey Research Data Analysis Methods that researchers may use, including:

  • Descriptive statistics: This method is used to summarize and describe the basic features of the survey data, such as the mean, median, mode, and standard deviation. These statistics can help researchers understand the distribution of responses and identify any trends or patterns.
  • Inferential statistics: This method is used to make inferences about the larger population based on the data collected in the survey. Common inferential statistical methods include hypothesis testing, regression analysis, and correlation analysis.
  • Factor analysis: This method is used to identify underlying factors or dimensions in the survey data. This can help researchers simplify the data and identify patterns and relationships that may not be immediately apparent.
  • Cluster analysis: This method is used to group similar respondents together based on their survey responses. This can help researchers identify subgroups within the larger population and understand how different groups may differ in their attitudes, behaviors, or preferences.
  • Structural equation modeling: This method is used to test complex relationships between variables in the survey data. It can help researchers understand how different variables may be related to one another and how they may influence one another.
  • Content analysis: This method is used to analyze open-ended responses in the survey data. Researchers may use software to identify themes or categories in the responses, or they may manually review and code the responses.
  • Text mining: This method is used to analyze text-based survey data, such as responses to open-ended questions. Researchers may use software to identify patterns and themes in the text, or they may manually review and code the text.
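
As a minimal illustration of the first method above, Python's standard statistics module can compute descriptive statistics for a set of 1–5 ratings. The data here is invented for the example:

```python
import statistics

# Hypothetical responses to a 1-5 rating question (invented data).
responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]

mean = statistics.mean(responses)      # average rating
median = statistics.median(responses)  # middle value when sorted
mode = statistics.mode(responses)      # most common answer
stdev = statistics.stdev(responses)    # spread of responses

print(mean, median, mode, round(stdev, 2))  # 3.9 4.0 4 0.99
```

These summaries give a quick picture of how responses are distributed before any inferential analysis is attempted.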

Applications of Survey Research

Here are some common applications of survey research:

  • Market Research: Companies use survey research to gather insights about customer needs, preferences, and behavior. These insights are used to create marketing strategies and develop new products.
  • Public Opinion Research: Governments and political parties use survey research to understand public opinion on various issues. This information is used to develop policies and make decisions.
  • Social Research: Survey research is used in social research to study social trends, attitudes, and behavior. Researchers use survey data to explore topics such as education, health, and social inequality.
  • Academic Research: Survey research is used in academic research to study various phenomena. Researchers use survey data to test theories, explore relationships between variables, and draw conclusions.
  • Customer Satisfaction Research: Companies use survey research to gather information about customer satisfaction with their products and services. This information is used to improve customer experience and retention.
  • Employee Surveys: Employers use survey research to gather feedback from employees about their job satisfaction, working conditions, and organizational culture. This information is used to improve employee retention and productivity.
  • Health Research: Survey research is used in health research to study topics such as disease prevalence, health behaviors, and healthcare access. Researchers use survey data to develop interventions and improve healthcare outcomes.

Examples of Survey Research

Here are some real-time examples of survey research:

  • COVID-19 Pandemic Surveys: Since the outbreak of the COVID-19 pandemic, surveys have been conducted to gather information about public attitudes, behaviors, and perceptions related to the pandemic. Governments and healthcare organizations have used this data to develop public health strategies and messaging.
  • Political Polls During Elections: During election seasons, surveys are used to measure public opinion on political candidates, policies, and issues in real-time. This information is used by political parties to develop campaign strategies and make decisions.
  • Customer Feedback Surveys: Companies often use real-time customer feedback surveys to gather insights about customer experience and satisfaction. This information is used to improve products and services quickly.
  • Event Surveys: Organizers of events such as conferences and trade shows often use surveys to gather feedback from attendees in real-time. This information can be used to improve future events and make adjustments during the current event.
  • Website and App Surveys: Website and app owners use surveys to gather real-time feedback from users about the functionality, user experience, and overall satisfaction with their platforms. This feedback can be used to improve the user experience and retain customers.
  • Employee Pulse Surveys: Employers use real-time pulse surveys to gather feedback from employees about their work experience and overall job satisfaction. This feedback is used to make changes in real-time to improve employee retention and productivity.

Purpose of Survey Research

The purpose of survey research is to gather data and insights from a representative sample of individuals. Survey research allows researchers to collect data quickly and efficiently from a large number of people, making it a valuable tool for understanding attitudes, behaviors, and preferences.

Here are some common purposes of survey research:

  • Descriptive Research: Survey research is often used to describe characteristics of a population or a phenomenon. For example, a survey could be used to describe the characteristics of a particular demographic group, such as age, gender, or income.
  • Exploratory Research: Survey research can be used to explore new topics or areas of research. Exploratory surveys are often used to generate hypotheses or identify potential relationships between variables.
  • Explanatory Research: Survey research can be used to explain relationships between variables. For example, a survey could be used to determine whether there is a relationship between educational attainment and income.
  • Evaluation Research: Survey research can be used to evaluate the effectiveness of a program or intervention. For example, a survey could be used to evaluate the impact of a health education program on behavior change.
  • Monitoring Research: Survey research can be used to monitor trends or changes over time. For example, a survey could be used to monitor changes in attitudes towards climate change or political candidates over time.

When to use Survey Research

There are certain circumstances where survey research is particularly appropriate. Here are some situations where survey research may be useful:

  • When the research question involves attitudes, beliefs, or opinions: Survey research is particularly useful for understanding attitudes, beliefs, and opinions on a particular topic. For example, a survey could be used to understand public opinion on a political issue.
  • When the research question involves behaviors or experiences: Survey research can also be useful for understanding behaviors and experiences. For example, a survey could be used to understand the prevalence of a particular health behavior.
  • When a large sample size is needed: Survey research allows researchers to collect data from a large number of people quickly and efficiently. This makes it a useful method when a large sample size is needed to ensure statistical validity.
  • When the research question is time-sensitive: Survey research can be conducted quickly, which makes it a useful method when the research question is time-sensitive. For example, a survey could be used to understand public opinion on a breaking news story.
  • When the research question involves a geographically dispersed population: Survey research can be conducted online, which makes it a useful method when the population of interest is geographically dispersed.
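
The sample-size point above can be made concrete with the standard formula for estimating a proportion, n = z²·p(1−p)/e². A small sketch, where the defaults assume a 95% confidence level (z = 1.96) and a ±5% margin of error:

```python
import math

def sample_size(confidence_z=1.96, margin=0.05, p=0.5):
    """Minimum sample size to estimate a proportion p with the given
    margin of error at the z-value's confidence level (95% -> z=1.96).
    p=0.5 is the most conservative (largest-sample) assumption."""
    n = (confidence_z ** 2) * p * (1 - p) / margin ** 2
    return math.ceil(n)

print(sample_size())             # 385 respondents for ±5% at 95% confidence
print(sample_size(margin=0.03))  # a tighter margin needs more respondents
```

Note the trade-off the formula exposes: halving the margin of error roughly quadruples the required sample.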

How to Conduct Survey Research

Conducting survey research involves several steps that need to be carefully planned and executed. Here is a general overview of the process:

  • Define the research question: The first step in conducting survey research is to clearly define the research question. The research question should be specific, measurable, and relevant to the population of interest.
  • Develop a survey instrument: The next step is to develop a survey instrument. This can be done using various methods, such as online survey tools or paper surveys. The survey instrument should be designed to elicit the information needed to answer the research question, and should be pre-tested with a small sample of individuals.
  • Select a sample: The sample is the group of individuals who will be invited to participate in the survey. The sample should be representative of the population of interest, and the size of the sample should be sufficient to ensure statistical validity.
  • Administer the survey: The survey can be administered in various ways, such as online, by mail, or in person. The method of administration should be chosen based on the population of interest and the research question.
  • Analyze the data: Once the survey data is collected, it needs to be analyzed. This involves summarizing the data using statistical methods, such as frequency distributions or regression analysis.
  • Draw conclusions: The final step is to draw conclusions based on the data analysis. This involves interpreting the results and answering the research question.
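
The "analyze the data" step above can be sketched with a frequency distribution, one of the summaries mentioned. The responses are invented for the example:

```python
from collections import Counter

# Hypothetical closed-ended responses collected in the previous step.
answers = ["Agree", "Agree", "Neutral", "Disagree", "Agree", "Neutral"]

counts = Counter(answers)
total = len(answers)

# A frequency distribution: each option with its count and percentage.
for option, count in counts.most_common():
    print(f"{option}: {count} ({100 * count / total:.0f}%)")
```

Tabulating counts and percentages like this is usually the first pass over closed-ended survey data, before any inferential tests are run.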

Advantages of Survey Research

There are several advantages to using survey research, including:

  • Efficient data collection: Survey research allows researchers to collect data quickly and efficiently from a large number of people. This makes it a useful method for gathering information on a wide range of topics.
  • Standardized data collection: Surveys are typically standardized, which means that all participants receive the same questions in the same order. This ensures that the data collected is consistent and reliable.
  • Cost-effective: Surveys can be conducted online, by mail, or in person, which makes them a cost-effective method of data collection.
  • Anonymity: Participants can remain anonymous when responding to a survey. This can encourage participants to be more honest and open in their responses.
  • Easy comparison: Surveys allow for easy comparison of data between different groups or over time. This makes it possible to identify trends and patterns in the data.
  • Versatility: Surveys can be used to collect data on a wide range of topics, including attitudes, beliefs, behaviors, and preferences.

Limitations of Survey Research

Here are some of the main limitations of survey research:

  • Limited depth: Surveys are typically designed to collect quantitative data, which means that they do not provide much depth or detail about people’s experiences or opinions. This can limit the insights that can be gained from the data.
  • Potential for bias: Surveys can be affected by various biases, including selection bias, response bias, and social desirability bias. These biases can distort the results and make them less accurate.
  • Limited validity: Surveys are only as valid as the questions they ask. If the questions are poorly designed or ambiguous, the results may not accurately reflect the respondents’ attitudes or behaviors.
  • Limited generalizability: Survey results are only generalizable to the population from which the sample was drawn. If the sample is not representative of the population, the results may not be generalizable to the larger population.
  • Limited ability to capture context: Surveys typically do not capture the context in which attitudes or behaviors occur. This can make it difficult to understand the reasons behind the responses.
  • Limited ability to capture complex phenomena: Surveys are not well-suited to capture complex phenomena, such as emotions or the dynamics of interpersonal relationships.

The following is an example of a survey sample:

Welcome to our Survey Research Page! We value your opinions and appreciate your participation in this survey. Please answer the questions below as honestly and thoroughly as possible.

1. What is your age?

  • A) Under 18
  • G) 65 or older

2. What is your highest level of education completed?

  • A) Less than high school
  • B) High school or equivalent
  • C) Some college or technical school
  • D) Bachelor’s degree
  • E) Graduate or professional degree

3. What is your current employment status?

  • A) Employed full-time
  • B) Employed part-time
  • C) Self-employed
  • D) Unemployed

4. How often do you use the internet per day?

  • A) Less than 1 hour
  • B) 1-3 hours
  • C) 3-5 hours
  • D) 5-7 hours
  • E) More than 7 hours

5. How often do you engage in social media per day?

6. Have you ever participated in a survey research study before?

7. If you have participated in a survey research study before, how was your experience?

  • A) Excellent
  • E) Very poor

8. What are some of the topics that you would be interested in participating in a survey research study about?

………………………………………………………………

9. How often would you be willing to participate in survey research studies?

  • A) Once a week
  • B) Once a month
  • C) Once every 6 months
  • D) Once a year

10. Any additional comments or suggestions?

Thank you for taking the time to complete this survey. Your feedback is important to us and will help us improve our survey research efforts.

About the author

Muhammad Hassan

Researcher, Academic Writer, Web developer

Survey Research: Types, Examples & Methods

busayo.longe

Surveys have been proven to be one of the most effective methods of conducting research. They help you to gather relevant data from a large audience, which helps you to arrive at a valid and objective conclusion. 

Just like other research methods, survey research has to be conducted the right way to be effective. In this article, we’ll dive into the nitty-gritty of survey research and show you how to get the most out of it. 

What is Survey Research? 

Survey research is simply a systematic investigation conducted via a survey. In other words, it is a type of research carried out by administering surveys to respondents. 

Surveys already serve as a great method of opinion sampling and finding out what people think about different contexts and situations. Applying this to research means you can gather first-hand information from persons affected by specific contexts. 

Survey research proves useful in numerous primary research scenarios. Consider a case where a restaurant wants to gather feedback from its customers on its new signature dish. A good way to do this is to conduct survey research on a defined customer demographic. 

By doing this, the restaurant is better able to gather primary data from the customers (respondents) with regard to what they think and feel about the new dish across multiple facets. This means they’d have more valid and objective information to work with. 

Why Conduct Survey Research?  

One of the strongest arguments for survey research is that it helps you gather authentic data sets in a systematic investigation. Survey research is a gateway to collecting specific information from defined respondents, first-hand. 

Surveys combine different question types that make it easy for you to collect a wide range of information from respondents. When you come across a questionnaire for survey research, you’re likely to see a neat blend of closed-ended and open-ended questions, together with other survey response scale questions. 

Apart from what we’ve discussed so far, here are some other reasons why survey research is important: 

  • It gives you insights into respondents’ behaviors and preferences which is valid in any systematic investigation.
  • Many times, survey research is structured in an interactive manner which makes it easier for respondents to communicate their thoughts and experiences. 
  • It allows you to gather important data that proves useful for product improvement; especially in market research. 

Characteristics of Survey Research

  • Usage: Survey research is mostly deployed in the field of social science, especially to gather information about human behavior in different social contexts.
  • Systematic: Like other research methods, survey research is systematic. This means that it is usually conducted in line with empirical methods and follows specific processes.
  • Replicable: In survey research, applying the same methods often translates to achieving similar results.
  • Types: Survey research can be conducted using forms (offline and online) or via structured, semi-structured, and unstructured interviews.
  • Data: The data gathered from survey research is mostly quantitative, although it can be qualitative.
  • Impartial Sampling: The data sample in survey research is random and not subject to avoidable biases.
  • Ecological Validity: Survey research often makes use of data samples obtained from real-world occurrences.

Types of Survey Research

Survey research can be subdivided into different types based on its objectives, data source, and methodology. 

Types of Survey Research Based on Objective

  • Exploratory Survey Research

Exploratory survey research is aimed at finding out more about the research context. Here, the survey research pays attention to discovering new ideas and insights about the research subject(s) or contexts. 

Exploratory survey research is usually made up of open-ended questions that allow respondents to fully communicate their thoughts and varying perspectives on the subject matter. In many cases, systematic investigation kicks off with an exploratory research survey. 

  • Predictive Survey Research

This type of research is also referred to as causal survey research because it focuses on the causative relationship between the variables in the survey research. In other words, predictive survey research examines existing patterns to explain the relationship between two variables. 

It can also be referred to as conclusive research because it allows you to identify causal variables and resultant variables; that is, cause and effect. Predictive variables allow you to determine the nature of the relationship between the causal variables and the effect to be predicted. 

  • Descriptive Survey Research

Unlike predictive research, descriptive survey research is largely observational. It is ideal for quantitative research because it helps you to gather numeric data. 

The questions listed in descriptive survey research help you to uncover new insights into the actions, thoughts, and feelings of survey respondents. With this data, you can know the extent to which different conditions can be obtained among these subjects. 

Types of Survey Research Based on Data Source

  • Secondary Data

Survey research can be designed to collect and process secondary data. Secondary data is data that was collected from primary sources in the past and is readily available for use; in other words, it is data that already exists.

Since secondary data is gathered from third-party sources, it is mostly generic, unlike primary data that is specific to the research context. Common sources of secondary data in survey research include books, data collected through other surveys, online data, data from government archives, and libraries. 

  • Primary Data

This is the type of research data that is collected directly; that is, data collected from first-hand sources. Primary data is usually tailored to a specific research context so that it reflects the aims and objectives of the systematic investigation.

One of the strongest points of primary data over its secondary counterpart is validity. Because it is collected directly from first-hand sources, primary data typically results in objective research findings. 

You can collect primary data via interviews, surveys and questionnaires, and observation methods. 

Types of Survey Research Based on Methodology

  • Quantitative Research

Quantitative research is a common research method that is used to gather numerical data in a systematic investigation. It is often deployed in research contexts that require statistical information to arrive at valid results such as in social science or science. 

For instance, as an organization looking to find out how many persons are using your product in a particular location, you can administer survey research to collect useful quantitative data. Other quantitative research methods include polls, face-to-face interviews, and systematic observation. 

  • Qualitative Research

This is a method of systematic investigation that is used to collect non-numerical data from research participants. In other words, it is a research method that allows you to gather open-ended information from your target audience. 

Typically, organizations deploy qualitative research methods when they need to gather descriptive data from their customers; for example, when they need to collect customer feedback in product evaluation. Qualitative research methods include one-on-one interviews, observation, case studies, and focus groups. 

Survey Research Scales

  • Nominal Scale

This is a type of survey research scale that uses numbers to label the different answer options in a survey. On a nominal scale, the numbers have no value in themselves; they simply serve as labels for qualitative variables in the survey. 

In cases where a nominal scale is used for identification, there is typically a specific one-on-one relationship between the numeric value and the variable it represents. On the other hand, when the variable is used for classification, then each number on the scale serves as a label or a tag. 

Examples of Nominal Scale in Survey Research 

1. How would you describe your complexion? 

2. Have you used this product?

  • Ordinal Scale

This is a type of variable measurement scale that arranges answer options in a specific ranking order without necessarily indicating the degree of variation between these options. Ordinal data is qualitative and can be named, ranked, or grouped. 

An ordinal scale identifies, describes, and shows the rank of the different variables, even though the exact differences between those ranks are unknown. With an ordered scale, it is easier for researchers to measure the degree of agreement or disagreement with different variables. 

With ordinal scales, you can measure non-numerical attributes such as the degree of happiness, agreement, or opposition of respondents in specific contexts. Using an ordinal scale makes it easy for you to compare variables and process survey responses accordingly. 

Examples of Ordinal Scale in Survey Research

1. How often do you use this product?

  • Every day
  • A few times a week
  • Rarely
  • Prefer not to say

2. How much do you agree with our new policies? 

  • Totally agree
  • Somewhat agree
  • Somewhat disagree
  • Totally disagree

  • Interval Scale

This is a type of survey scale used to measure variables existing at equal intervals along a common scale. In some ways it combines the attributes of nominal and ordinal scales, since it is used where there is both an order and a meaningful difference between two variables. 

With an interval scale, you can quantify the difference in value between two variables in survey research. In addition to this, you can carry out other mathematical processes like calculating the mean and median of research variables. 

Examples of Interval Scale in Survey Research

1. Our customer support team was very effective. 

  • Completely agree
  • Somewhat agree
  • Neither agree nor disagree
  • Somewhat disagree
  • Completely disagree 

2. I enjoyed using this product.

Another example of an interval scale can be seen in the Net Promoter Score.
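The Net Promoter Score is computed from 0–10 "how likely are you to recommend us?" ratings: respondents scoring 9–10 are promoters, 0–6 are detractors, and the score is the percentage of promoters minus the percentage of detractors. A minimal sketch (the function name is illustrative):

```python
def net_promoter_score(ratings):
    """Compute NPS from 0-10 'likelihood to recommend' ratings.

    Promoters score 9-10, detractors 0-6; passives (7-8) only
    affect the denominator. The result ranges from -100 to +100.
    """
    if not ratings:
        raise ValueError("no ratings supplied")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# 5 promoters, 3 passives, 2 detractors out of 10 responses
print(net_promoter_score([10, 9, 9, 10, 9, 8, 7, 8, 5, 3]))  # 30.0
```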

  • Ratio Scale

Just like the interval scale, the ratio scale is quantitative, and it is used when you need to compare intervals or differences in survey research. It is the highest level of measurement and combines the properties of the other survey scales. 

One of the unique features of the ratio scale is that it has a true zero and equal intervals between the variables on the scale. The zero indicates an absence of the variable being measured. Common uses of ratio scales include distance (length), area, and population measurement. 

Examples of Ratio Scale in Survey Research

1. How old are you?

  • Below 18 years
  • 18–40 years
  • 41 and above

2. How many times do you shop in a week?

  • Less than twice
  • Three times
  • More than four times
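The scale type determines which summary statistics are meaningful: nominal data only supports the mode, ordinal data adds the median (on rank-coded values), and interval/ratio data also allow the mean. A minimal sketch using only the standard library (the function name and the example data are illustrative):

```python
from collections import Counter
from statistics import median, mean

def summarize(values, scale):
    """Return only the summary statistics valid for the given scale."""
    stats = {"mode": Counter(values).most_common(1)[0][0]}
    if scale in ("ordinal", "interval", "ratio"):
        stats["median"] = median(values)
    if scale in ("interval", "ratio"):
        stats["mean"] = mean(values)
    return stats

# Nominal: colours chosen by respondents -> only a mode is valid
print(summarize(["red", "blue", "red"], "nominal"))
# Ordinal: agreement coded 1 (disagree) .. 5 (agree) -> mode + median
print(summarize([1, 4, 4, 5, 2], "ordinal"))
# Ratio: weekly shop counts -> mode, median and mean are all valid
print(summarize([2, 3, 3, 4], "ratio"))
```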

Uses of Survey Research

  • Health Surveys

Survey research is used by health practitioners to gather useful data from patients in different medical and safety contexts. It helps you to gather primary and secondary data about medical conditions and risk factors of multiple diseases and infections. 

In addition, administering health surveys regularly helps you to monitor the overall health status of a population, whether in the workplace, school, or community. This kind of data can help prevent outbreaks and minimize medical emergencies in these contexts. 

  • Opinion Polls

Survey research is also useful when conducting polls, whether online or offline. A poll is a data collection tool that helps you to gather public opinion about a particular subject from a well-defined research sample.

By administering survey research, you can gather valid data from a well-defined research sample, and utilize research findings for decision making. For example, during elections, individuals can be asked to choose their preferred leader via questionnaires administered as part of survey research.

  • Customer Satisfaction

Customer satisfaction is one of the core concerns of every organization, as it is directly tied to how well your product or service meets the needs of your clients. Survey research is an effective way to measure customer satisfaction at regular intervals. 

As a restaurant, for example, you can send out online surveys to customers immediately after they patronize your business. In these surveys, encourage them to give feedback on their experience and to suggest how your service delivery can be improved. 

  • Census

Survey research makes data collection and analysis easy during a census. With an online survey tool like Formplus, you can seamlessly gather data during a census without moving from a spot. Formplus has multiple sharing options that help you collect information without stress. 

Survey Research Methods

Survey research can be done using different online and offline methods. Let’s examine a few of them here.

  • Telephone Surveys

This is a means of conducting survey research via phone calls. In a telephone survey, the researcher places a call to the survey respondents and gathers information from them by asking questions about the research context under consideration. 

A telephone survey simulates the face-to-face survey experience, since it involves speaking with respondents to gather and process valid data. However, this method can be expensive and time-consuming. 

  • Online Surveys

An online survey is a data collection tool used to create and administer surveys and questionnaires using data tools like Formplus. Online surveys work better than paper forms and other offline survey methods because you can easily gather and process data from a large sample size with them. 

  • Face-to-Face Interviews

Face-to-face interviews for survey research can be structured, semi-structured, or unstructured depending on the research context and the type of data you want to collect. If you want to gather qualitative data, then unstructured and semi-structured interviews are the way to go. 

On the other hand, if you want to collect quantifiable information from your research sample, conducting a structured interview is the best way to go. Face-to-face interviews can also be time-consuming and cost-intensive. Let’s mention here that face-to-face surveys are one of the most widely used methods of survey data collection. 

How to Conduct Research Surveys on Formplus 

With Formplus, you can create forms for survey research without any hassles. Follow this step-by-step guide to create and administer online surveys for research via Formplus. 

1. Sign up at www.formpl.us to create your Formplus account, or log in if you already have one.

5. Use the form customization options to change the appearance of your survey. You can add your organization’s logo to the survey, change the form font and layout, and insert preferred background images.

Advantages of Survey Research

  • It is inexpensive – with survey research, you can avoid the cost of in-person interviews. It is also easy to collect data, as you can share your surveys online and get responses from a large demographic.
  • It is the fastest way to get a large amount of first-hand data.
  • Surveys make it easy to compare results using charts and graphs.
  • It is versatile, as it can be used for almost any research topic.
  • Surveys work well for respondents who wish to remain anonymous. 

Disadvantages of Survey Research

  • Some questions may go unanswered.
  • Respondents may interpret survey questions differently.
  • It may not be the best option for respondents with visual or hearing impairments, or for populations with low literacy.
  • Respondents can give dishonest answers in survey research.

Conclusion 

In this article, we’ve discussed survey research extensively, touching on many of its important aspects. As a researcher, organization, individual, or student, it is important to understand how survey research works in order to use it effectively and get the most from this method of systematic investigation. 

As we’ve already stated, conducting survey research online is one of the most effective methods of data collection as it allows you to gather valid data from a large group of respondents. If you’re looking to kick off your survey research, you can start by signing up for a Formplus account here. 

What is survey research?

Find out everything you need to know about survey research, from what it is and how it works to the different methods and tools you can use to ensure you’re successful.

Survey research is the process of collecting data from a predefined group (e.g. customers or potential customers) with the ultimate goal of uncovering insights about your products, services, or brand overall.

As a quantitative data collection method, survey research can provide you with a goldmine of information that can inform crucial business and product decisions. But survey research needs careful planning and execution to get the results you want.

So if you’re thinking about using surveys to carry out research, read on.


Types of survey research

Calling these methods ‘survey research’ slightly underplays the complexity of this type of information gathering. From the expertise required to carry out each activity to the analysis of the data and its eventual application, a considerable amount of effort is required.

As for how you can carry out your research, there are several options to choose from — face-to-face interviews, telephone surveys, focus groups (though these are more interviews than surveys), online surveys, and panel surveys.

Typically, the survey method you choose will be guided largely by who you want to survey, the size of your sample, your budget, and the type of information you hope to gather.

Here are a few of the most-used survey types:

Face-to-face interviews

Before technology made it possible to conduct research using online surveys, telephone and mail were the most popular methods for survey research. However, face-to-face interviews were considered the gold standard — the only reason they weren’t as popular was their prohibitively high cost.

When it came to face-to-face interviews, organizations would use highly trained researchers who knew when to probe or follow up on vague or problematic answers. They also knew when to offer assistance to respondents when they seemed to be struggling. The result was that these interviewers could get sample members to participate and engage in surveys in the most effective way possible, leading to higher response rates and better quality data.

Telephone surveys

While phone surveys have been popular in the past, particularly for measuring general consumer behavior or beliefs, response rates have been declining since the 1990s.

Phone surveys are usually conducted using a random dialing system and software that a researcher can use to record responses.

This method is beneficial when you want to survey a large population but don’t have the resources to conduct face-to-face research surveys or run focus groups, or when you want to ask multiple-choice and open-ended questions.

The downsides: phone surveys can take a long time to complete depending on the response rate, and you may have to do a lot of cold-calling to get the information you need.

You also run the risk of respondents not being completely honest. Instead, they’ll answer your survey questions quickly just to get off the phone.

Focus groups (interviews — not surveys)

Focus groups are a separate qualitative methodology rather than surveys — even though they’re often bunched together. They’re normally used for survey pretesting and design, but they’re also a great way to generate opinions and data from a diverse range of people.

Focus groups involve putting a cohort of demographically or socially diverse people in a room with a moderator and engaging them in a discussion on a particular topic, such as your product, brand, or service.

They remain a highly popular method for market research, but they’re expensive and require a lot of administration to conduct and analyze the data properly.

You also run the risk of more dominant members of the group taking over the discussion and swaying the opinions of other people — potentially providing you with unreliable data.

Online surveys

Online surveys have become one of the most popular survey methods because they are cost-effective and enable researchers to survey large populations quickly and accurately.

Online surveys can essentially be used by anyone for any research purpose – we’ve all seen the increasing popularity of polls on social media (although these are not scientific).

Using an online survey allows you to ask a series of different question types and collect data instantly that’s easy to analyze with the right software.

There are also several methods for running and distributing online surveys that allow you to get your questionnaire in front of a large population at a fraction of the cost of face-to-face interviews or focus groups.

This is particularly true when it comes to mobile surveys as most people with a smartphone can access them online.

However, you have to be aware of the potential dangers of using online surveys, particularly when it comes to the survey respondents. The biggest risk is that, because online surveys require access to a computer or mobile device to complete, they could exclude elderly members of the population who don’t have access to the technology — or don’t know how to use it.

It could also exclude those from poorer socio-economic backgrounds who can’t afford a computer or consistent internet access. This could mean the data collected is more biased towards a certain group and can lead to less accurate data when you’re looking for a representative population sample.


Panel surveys

A panel survey involves recruiting respondents who have specifically signed up to answer questionnaires and who are put on a list by a research company. This could be a workforce of a small company or a major subset of a national population. Usually, these groups are carefully selected so that they represent a sample of your target population — giving you balance across criteria such as age, gender, background, and so on.

Panel surveys give you access to the respondents you need and are usually provided by the research company in question. As a result, it’s much easier to get access to the right audiences as you just need to tell the research company your criteria. They’ll then determine the right panels to use to answer your questionnaire.

However, there are downsides. The main one is that if the research company offers its panels incentives (e.g. discounts, coupons, money), respondents may answer a lot of questionnaires just for the benefits.

This might mean they rush through your survey without providing considered and truthful answers. As a consequence, this can damage the credibility of your data and potentially ruin your analyses.

What are the benefits of using survey research?

Depending on the research method you use, there are lots of benefits to conducting survey research for data collection. Here, we cover a few:

1.   They’re relatively easy to do

Most research surveys are easy to set up, administer, and analyze. As long as the planning and survey design are thorough and you target the right audience, data collection is usually straightforward regardless of which survey type you use.

2.   They can be cost effective

Survey research can be relatively cheap depending on the type of survey you use.

Generally, qualitative research methods that require access to people in person or over the phone are more expensive and require more administration.

Online surveys or mobile surveys are often more cost-effective for market research and can give you access to the global population for a fraction of the cost.

3.   You can collect data from a large sample

Again, depending on the type of survey, you can obtain survey results from an entire population at a relatively low price. You can also administer a large variety of survey types to fit the project you’re running.

4.   You can use survey software to analyze results immediately

Using survey software, you can use advanced statistical analysis techniques to gain insights into your responses immediately.

Analysis can be conducted using a variety of parameters to determine the validity and reliability of your survey data at scale.
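One widely used reliability statistic for multi-item survey scales is Cronbach's alpha, which estimates how consistently a set of items measures the same underlying construct. Below is a minimal standard-library sketch, assuming responses are numerically coded (e.g. 1–5 Likert items); the data shown is hypothetical:

```python
from statistics import pvariance

def cronbach_alpha(responses):
    """Cronbach's alpha for a list of respondents' item scores.

    `responses` holds one row per respondent, each row containing
    that respondent's score on every item of the scale. Alpha near
    1 suggests the items measure the same construct.
    """
    k = len(responses[0])                 # number of items
    items = list(zip(*responses))         # one column per item
    item_var = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(row) for row in responses])
    return (k / (k - 1)) * (1 - item_var / total_var)

# Three 1-5 Likert items answered by five hypothetical respondents
data = [(4, 5, 4), (3, 3, 2), (5, 4, 5), (2, 2, 1), (4, 4, 4)]
print(round(cronbach_alpha(data), 3))  # 0.944
```

Dedicated survey or statistics software computes this (and related validity checks) for you, but the formula itself is just the ratio of item variance to total-score variance shown above.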

5.   Surveys can collect any type of data

While most people view surveys as a quantitative research method, they can just as easily be adapted to gain qualitative information by simply including open-ended questions or conducting interviews face to face.

How to measure concepts with survey questions

While surveys are a great way to obtain data, that data on its own is useless unless it can be analyzed and developed into actionable insights.

The easiest, and most effective way to measure survey results, is to use a dedicated research tool that puts all of your survey results into one place.

When it comes to survey measurement, there are four measurement types to be aware of that will determine how you treat your different survey results:

Nominal scale

With a nominal scale, you can only keep track of how many respondents chose each option in a question, and which response generated the most selections.

An example of this would be simply asking a respondent to choose a product or brand from a list.

You could find out which brand was chosen the most but have no insight as to why.

Ordinal scale

Ordinal scales are used to judge an order of preference. They provide some level of quantitative value because you’re asking respondents to choose a preference for one option over another.

Ratio scale

Ratio scales can be used to judge the order and difference between responses. For example, asking respondents how much they spend on their weekly shopping on average.

Interval scale

In an interval scale, values are lined up in order with a meaningful difference between the two values — for example, measuring temperature or measuring a credit score between one value and another.

Step by step: How to conduct surveys and collect data

Conducting a survey and collecting data is relatively straightforward, but it does require some careful planning and design to ensure it results in reliable data.

Step 1 – Define your objectives

What do you want to learn from the survey? How is the data going to help you? Having a hypothesis or series of assumptions about survey responses will allow you to create the right questions to test them.

Step 2 – Create your survey questions

Once you’ve got your hypotheses or assumptions, write out the questions you need answering to test your theories or beliefs. Be wary about framing questions that could lead respondents or inadvertently create biased responses .

Step 3 – Choose your question types

Your survey should include a variety of question types and should aim to obtain quantitative data alongside some qualitative responses from open-ended questions. Using a mix of questions (simple yes/no, multiple-choice, rank in order, etc.) not only increases the reliability of your data but also reduces survey fatigue and discourages respondents from answering quickly without thinking.


Step 4 – Test your questions

Before sending your questionnaire out, you should test it (e.g. have a random internal group take the survey) and carry out A/B tests to ensure you’ll get accurate responses.

Step 5 – Choose your target and send out the survey

Depending on your objectives, you might want to target the general population with your survey or a specific segment of the population. Once you’ve narrowed down who you want to target, it’s time to send out the survey.

After you’ve deployed the survey, keep an eye on the response rate to ensure you’re getting the number you expected. If your response rate is low, you might need to send the survey out to a second group to obtain a large enough sample — or do some troubleshooting to work out why your response rates are so low. This could be down to your questions, delivery method, selected sample, or otherwise.
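The response rate itself is simply the share of invitations that produced a completed survey. A minimal sketch of the monitoring step described above (the 30% threshold is an arbitrary illustrative choice, not a standard):

```python
def response_rate(invited, completed):
    """Response rate as a percentage of invitations that yielded
    a completed survey."""
    if invited <= 0:
        raise ValueError("invited must be positive")
    return 100 * completed / invited

rate = response_rate(invited=500, completed=110)
print(f"{rate:.1f}% response rate")  # 22.0% response rate
if rate < 30:  # illustrative troubleshooting threshold
    print("Low response rate - review sample, questions, or delivery")
```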

Step 6 – Analyze results and draw conclusions

Once you’ve got your results back, it’s time for the fun part.

Break down your survey responses using the parameters you’ve set in your objectives and analyze the data to compare to your original assumptions. At this stage, a research tool or software can make the analysis a lot easier — and that’s somewhere Qualtrics can help.
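Breaking results down by a parameter usually means cross-tabulating answers against a respondent attribute. A minimal sketch of that idea in plain Python; the field names (`age_group`, `satisfied`) are hypothetical, and a real export would use your own survey's fields:

```python
from collections import defaultdict

def breakdown(responses, segment_key, answer_key):
    """Tally answers to one question, split by a respondent segment."""
    table = defaultdict(lambda: defaultdict(int))
    for r in responses:
        table[r[segment_key]][r[answer_key]] += 1
    return {seg: dict(ans) for seg, ans in table.items()}

# Hypothetical responses: a satisfaction answer crossed with age group
responses = [
    {"age_group": "18-34", "satisfied": "yes"},
    {"age_group": "18-34", "satisfied": "no"},
    {"age_group": "35-54", "satisfied": "yes"},
    {"age_group": "35-54", "satisfied": "yes"},
]
print(breakdown(responses, "age_group", "satisfied"))
# {'18-34': {'yes': 1, 'no': 1}, '35-54': {'yes': 2}}
```

Survey software automates exactly this kind of tabulation at scale, but the underlying operation is no more than the counting shown here.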

Get reliable insights with survey software from Qualtrics

Gaining feedback from customers and leads is critical for any business. Data gathered from surveys can prove invaluable for understanding your products and your market position, and with survey software from Qualtrics, it couldn’t be easier.

Used by more than 13,000 brands and supporting more than 1 billion surveys a year, Qualtrics empowers everyone in your organization to gather insights and take action. No coding required — and your data is housed in one system.

Get feedback from more than 125 sources on a single platform and view and measure your data in one place to create actionable insights and gain a deeper understanding of your target customers.

Automatically run complex text and statistical analysis to uncover exactly what your survey data is telling you, so you can react in real-time and make smarter decisions.

We can help you with survey management, too. From designing your survey and finding your target respondents to getting your survey in the field and reporting back on the results, we can help you every step of the way.

And for expert market researchers and survey designers, Qualtrics features custom programming to give you total flexibility over question types, survey design, embedded data, and other variables.

No matter what type of survey you want to run, what target audience you want to reach, or what assumptions you want to test or answers you want to uncover, we’ll help you design, deploy and analyze your survey with our team of experts.



A Short Introduction to Survey Research

  • Daniel Stockemer
  • First Online: 20 November 2018

In: Quantitative Methods for the Social Sciences, pp. 23–35.

This chapter offers a brief introduction to survey research. In the first part of the chapter, students learn about the importance of survey research in the social and behavioral sciences, substantive research areas where survey research is frequently used, and important cross-national surveys such as the World Values Survey and the European Social Survey. In the second part, I introduce different types of surveys. 

In the literature, such reversed causation is often referred to as an endogeneity problem.


Further Reading

Why Do We Need Survey Research?

Converse, J. M. (2017). Survey research in the United States: Roots and emergence 1890–1960. New York: Routledge. This book takes a more historical angle, tackling the history of survey research in the United States.

Davidov, E., Schmidt, P., & Schwartz, S. H. (2008). Bringing values back in: The adequacy of the European Social Survey to measure values in 20 countries. Public Opinion Quarterly, 72 (3), 420–445. This rather short article highlights the importance of conducting a large pan-European survey to measure Europeans’ social and political beliefs.

Schmitt, H., Hobolt, S. B., Popa, S. A., & Teperoglou, E. (2015). European parliament election study 2014, voter study. GESIS Data Archive, Cologne. ZA5160 Data file Version , 2 (0). The European Voter Study is another important election study that researchers and students can access freely. It provides a comprehensive battery of variables about voting, political preferences, vote choice, demographics, and political and social opinions of the electorate.

Applied Survey Research

Almond, G. A., & Verba, S. (1963). The civic culture: Political attitudes and democracy in five nations. Princeton: Princeton University Press. Almond’s and Verba’s masterpiece is a seminal work in survey research measuring citizens’ political and civic attitudes in key Western democracies. The book is also one of the first books that systematically uses survey research to measure political traits.

Inglehart, R., & Welzel, C. (2005). Modernization, cultural change, and democracy: The human development sequence . Cambridge: Cambridge University Press. This is an influential book, which uses data from the World Values Survey to explain modernization as a process that changes individual’s values away from traditional and patriarchal values and toward post-materialist values including environmental protection, minority rights, and gender equality.

Download references

Author information

Authors and affiliations.

School of Political Studies, University of Ottawa, Ottawa, Ontario, Canada

Daniel Stockemer

You can also search for this author in PubMed   Google Scholar

Rights and permissions

Reprints and permissions

Copyright information

© 2019 Springer International Publishing AG

About this chapter

Cite this chapter.

Stockemer, D. (2019). A Short Introduction to Survey Research. In: Quantitative Methods for the Social Sciences. Springer, Cham. https://doi.org/10.1007/978-3-319-99118-4_3


Published: 20 November 2018

Publisher Name: Springer, Cham

Print ISBN: 978-3-319-99117-7

Online ISBN: 978-3-319-99118-4




A Comprehensive Guide to Survey Research Methodologies

For decades, researchers and businesses have used survey research to produce statistical data and explore ideas. The survey process is simple: ask questions, then analyze the responses to make decisions. Data is what separates a valid statement from an invalid one, and as the American statistician W. Edwards Deming said:

“Without data, you’re just another person with an opinion.” - W. Edwards Deming

In this article, we will discuss what survey research is, its brief history, types, common uses, benefits, and the step-by-step process of designing a survey.

What is Survey Research

A survey is a research method used to collect data from a group of respondents in order to gain insights and information about a particular subject. It's an excellent way to gather opinions and understand how and why people feel the way they do about different situations and contexts.

Brief History of Survey Research

Survey research may have its roots in the American and English “social surveys” conducted around the turn of the 20th century. These surveys were mainly conducted by researchers and reformers to document the extent of social issues such as poverty. (1) Despite being a relatively young field compared to many scientific domains, survey research has passed through three stages of development (2):

- First Era (1930–1960)

- Second Era (1960–1990)

- Third Era (1990 onwards)

Over the years, survey research adapted to the changing times and technologies. By exploiting the latest technologies, researchers can gain access to the right population from anywhere in the world, analyze the data like never before, and extract useful information.

Survey Research Methods & Types

Survey research can be classified into seven categories based on objective, concept testing, data source, research method, deployment method, distribution channel, and frequency of deployment.


Surveys based on Objective

Exploratory Survey Research

Exploratory survey research aims to dive deeper into research subjects and find out more about their context. It's important for marketing or business strategy, and its focus is on discovering ideas and insights rather than gathering statistical data.

Generally, exploratory survey research is composed of open-ended questions that allow respondents to express their thoughts and perspectives. The final responses present information from various sources that can lead to fresh initiatives.

Predictive Survey Research

Predictive survey research, also called causal survey research, is preplanned, structured, and quantitative in nature. It's often referred to as conclusive research because it tries to explain the cause-and-effect relationship between different variables: the objective is to understand which variables are causes, which are effects, and the nature of the relationship between them.

Descriptive Survey Research

Descriptive survey research is largely observational and is ideal for gathering numeric data. Because of its quantitative nature, it's often contrasted with exploratory survey research; the difference is that descriptive research is structured and pre-planned.

The idea behind descriptive research is to describe the mindset and opinions of a particular group of people on a given subject. The questions are typically multiple choice, and respondents must choose from predefined categories. With predefined choices you don't get unique insights; rather, you get statistically inferable data.

Survey Research Types based on Concept Testing

Monadic Concept Testing

Monadic testing is a survey research methodology in which respondents are split into multiple groups and each group is asked questions about a separate concept in isolation. Generally, monadic surveys are hyper-focused on a particular concept and shorter in duration. The important thing in monadic surveys is to avoid getting off-topic or exhausting the respondents with too many questions.

Sequential Monadic Concept Testing

Another approach to monadic testing is sequential monadic testing. In sequential monadic surveys, groups of respondents are still surveyed in isolation. However, instead of surveying separate groups on different concepts, researchers survey the same group of people on several distinct concepts, one after another. In a sequential monadic survey, at least two concepts are included (in random order), and the same questions are asked about each concept to eliminate bias.
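As an illustration of the difference between the two designs, here is a minimal Python sketch (with hypothetical respondent and concept names): monadic testing assigns each group one concept, while sequential monadic testing shows every respondent all concepts in a randomized order.

```python
import random

def monadic_assignment(respondents, concepts, seed=0):
    """Monadic: split respondents into one group per concept;
    each group evaluates exactly one concept in isolation."""
    rng = random.Random(seed)
    shuffled = respondents[:]
    rng.shuffle(shuffled)
    groups = {c: [] for c in concepts}
    for i, r in enumerate(shuffled):
        groups[concepts[i % len(concepts)]].append(r)
    return groups

def sequential_monadic_order(concepts, seed=0):
    """Sequential monadic: every respondent sees all concepts,
    but in a random order to reduce order bias."""
    rng = random.Random(seed)
    order = concepts[:]
    rng.shuffle(order)
    return order

respondents = [f"R{i}" for i in range(9)]
concepts = ["Concept A", "Concept B", "Concept C"]
groups = monadic_assignment(respondents, concepts)
assert sum(len(g) for g in groups.values()) == len(respondents)
```

In a real study, the same questionnaire would then be fielded per group (monadic) or per concept in the chosen order (sequential monadic).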

Based on Data Source

Primary Data

Data obtained directly from the source or target population is referred to as primary survey data. When it comes to primary data collection, researchers usually devise a set of questions and invite people with knowledge of the subject to respond. The main sources of primary data are interviews, questionnaires, surveys, and observation methods.

Compared to secondary data, primary data is gathered from first-hand sources and is more reliable. However, the process of primary data collection is both costly and time-consuming.

Secondary Data

Survey research is generally used to collect first-hand information from respondents. However, surveys can also be designed to collect and process secondary data: data gathered from third-party sources, or primary data collected in the past.

This type of data is usually generic, readily available, and cheaper than primary data collection. Some common sources of secondary data are books, data collected from older surveys, online data, and data from government archives. Beware that you might compromise the validity of your findings if you end up with irrelevant or inflated data.

Based on Research Method

Quantitative Research

Quantitative research is a popular research methodology that is used to collect numeric data in a systematic investigation. It’s frequently used in research contexts where statistical data is required, such as sciences or social sciences. Quantitative research methods include polls, systematic observations, and face-to-face interviews.

Qualitative Research

Qualitative research is a research methodology where you collect non-numeric data from research participants. In this context, the participants are not restricted to a specific system and provide open-ended information. Some common qualitative research methods include focus groups, one-on-one interviews, observations, and case studies.

Based on Deployment Method

Online Surveys

With technology advancing rapidly, the most popular method of survey research is the online survey. With the internet, you can not only reach a broader audience but also design, customize, and deploy a survey from anywhere. Online surveys have outperformed offline methods because they are less expensive and allow researchers to easily collect and analyze data from a large sample.

Paper or Print Surveys

As the name suggests, paper or print surveys use the traditional paper and pencil approach to collect data. Before the invention of computers, paper surveys were the survey method of choice.

Though many would assume that surveys are no longer conducted on paper, print remains a reliable method of collecting information during field research and data collection. However, unlike online surveys, paper surveys are expensive and require extra human resources.

Telephonic Surveys

Telephonic surveys are conducted over the telephone, with a researcher asking a series of questions to the respondent on the other end. Contacting respondents by telephone requires less effort and fewer human resources, and it is less expensive.

What makes telephonic surveys debatable is that people are often reluctant to give information over a phone call. Additionally, the success of such surveys depends largely on whether people are willing to invest their time in answering questions on a call.

One-on-one Surveys

One-on-one surveys, also known as face-to-face surveys, are interviews in which the researcher and respondent interact directly. This introduces the human factor into the survey.

Face-to-face interviews are useful when the researcher wants to discuss something personal with the respondent. Response rates in such surveys are higher because the interview is conducted in person. However, these surveys are quite expensive, and their success depends on the knowledge and experience of the researcher.

Based on Distribution

Email Surveys

The easiest and most common way of conducting online surveys is sending out an email. Surveys distributed via email have a higher response rate because your target audience already knows about your brand and is likely to engage.

Buy Survey Responses

Purchasing survey responses also yields higher response rates because the respondents have signed up to take surveys. Businesses often purchase survey samples to conduct extensive research. Here, the target audience is usually pre-screened to check that they qualify to take part in the research.

Embedding Survey on a Website

Embedding surveys on a website is another excellent way to collect information. It allows your website visitors to take part in a survey without ever leaving the website and can be done while a person is entering or exiting the website.

Post the Survey on Social Media

Social media is an excellent medium for reaching a broad range of audiences. You can publish your survey as a link on social media, and people who follow the brand can take part and answer questions.

Based on Frequency of Deployment

Cross-Sectional Studies

Cross-sectional studies are administered to a small sample from a large population within a short period of time. This provides researchers a peek into what the respondents are thinking at a given time. The surveys are usually short, precise, and specific to a particular situation.

Longitudinal Surveys

Longitudinal surveys are an extension of cross-sectional studies where researchers make an observation and collect data over extended periods of time. This type of survey can be further divided into three types:

- Trend surveys allow researchers to understand how the respondents' thinking changes over time.

- Panel surveys are administered to the same group of people over multiple years. These are usually expensive, and researchers must stick with their panel to gather unbiased opinions.

- In cohort surveys, researchers identify a specific category of people and survey them regularly. Unlike panel surveys, the same people do not need to take part over the years, but each individual must fall into the researcher's primary interest category.

Retrospective Survey

Retrospective surveys allow researchers to ask questions that gather data about respondents' past events and beliefs. Because they also cover years of data, retrospective surveys are similar to longitudinal surveys, except that they are shorter and less expensive.

Why Should You Conduct Research Surveys?

“In God we trust. All others must bring data” - W. Edwards Deming

In the information age, survey research is of utmost importance for understanding the opinions of your target population. Whether you're launching a new product or conducting a social survey, surveys can be used to collect specific information from a defined set of respondents. The data collected via surveys can then be used by organizations to make informed decisions.

Furthermore, compared to other research methods, surveys are relatively inexpensive even if you're giving out incentives. Compared to older methods such as telephonic or paper surveys, online surveys cost less and draw a higher number of responses.

What makes surveys useful is that they describe the characteristics of a large population. With a larger sample size, you can rely on getting more accurate results. However, you also need honest and open answers for accurate results. When surveys are anonymous and responses remain confidential, respondents are more likely to provide candid and accurate answers.

Common Uses of a Survey

Surveys are widely used in many sectors, but the most common uses of the survey research include:

- Market research: surveying a potential market to understand customer needs, preferences, and market demand.

- Customer satisfaction: finding out your customers' opinions about your products, services, or company.

- Social research: investigating the characteristics and experiences of various social groups.

- Health research: collecting data about patients' symptoms and treatments.

- Politics: evaluating public opinion regarding policies and political parties.

- Psychology: exploring personality traits, behaviors, and preferences.

6 Steps to Conduct Survey Research

An organization, person, or company conducts a survey when they need the information to make a decision but have insufficient data on hand. Following are six simple steps that can help you design a great survey.

Step 1: Objective of the Survey

The first step in survey research is defining an objective. The objective helps you define your target population and samples. The target population is the specific group of people you want to collect data from and since it’s rarely possible to survey the entire population, we target a specific sample from it. Defining a survey objective also benefits your respondents by helping them understand the reason behind the survey.

Step 2: Number of Questions

The number of questions, or the size of the survey, depends on the survey objective. However, it's important to ensure that there are no redundant queries and that the questions follow a logical order. Rephrased and repeated questions in a survey are almost as frustrating as they are in real life. For a higher completion rate, keep the questionnaire short so that respondents stay engaged to the very end. The ideal length of an interview is less than 15 minutes. (2)

Step 3: Language and Voice of Questions

While designing a survey, you may feel compelled to use fancy language. However, remember that difficult language is associated with higher survey dropout rates. You need to speak to the respondent in a clear, concise, and neutral manner, and ask simple questions. If your survey respondents are bilingual, then adding an option to translate your questions into another language can also prove beneficial.

Step 4: Type of Questions

In a survey, you can include any type of question, both closed-ended and open-ended. However, opt for the question types that are easiest for respondents to understand and that offer the most value. For example, compared to open-ended questions, people prefer to answer closed-ended questions such as multiple-choice (MCQ) and net promoter score (NPS) questions.

Step 5: User Experience

Designing a great survey is about more than just questions. A lot of researchers underestimate the importance of user experience and how it affects their response and completion rates. An inconsistent, difficult-to-navigate survey with technical errors and poor color choice is unappealing for the respondents. Make sure that your survey is easy to navigate for everyone and if you’re using rating scales, they remain consistent throughout the research study.

Additionally, don’t forget to design a good survey experience for both mobile and desktop users. According to Pew Research Center, nearly half of smartphone users access the internet mainly from their mobile phones, and 14 percent of American adults are smartphone-only internet users. (3)

Step 6: Survey Logic

Last but not least, logic is another critical aspect of survey design. If the survey logic is flawed, respondents may be routed in the wrong direction. Make sure to test the logic to ensure that selecting one answer leads to the next logical question rather than to a series of unrelated queries.
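Skip logic of this kind can be represented as a simple data structure and tested before fielding. The following Python sketch (with hypothetical question ids and answers, not any particular survey platform's API) shows the idea:

```python
# Each answer maps to the id of the next question, so branching
# can be validated automatically before the survey goes live.
questions = {
    "q1": {"text": "Do you own a smartphone?",
           "next": {"yes": "q2", "no": "end"}},
    "q2": {"text": "How often do you shop on it?",
           "next": {"daily": "q3", "rarely": "end"}},
    "q3": {"text": "Which apps do you use?",
           "next": {"_any": "end"}},  # "_any": same target for every answer
}

def next_question(current, answer):
    branches = questions[current]["next"]
    return branches.get(answer, branches.get("_any", "end"))

def validate_logic(questions):
    """Check every branch target exists, so no respondent hits a dead end."""
    for qid, q in questions.items():
        for target in q["next"].values():
            assert target == "end" or target in questions, f"{qid} -> {target}"

validate_logic(questions)
assert next_question("q1", "no") == "end"
```

Running the validator on every revision of the questionnaire catches broken branches early, which is exactly the kind of logic testing recommended above.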

How to Effectively Use Survey Research with Starlight Analytics

Designing and conducting a survey is almost as much science as it is art. To craft great survey research, you need technical skills, an understanding of the psychological elements at play, and a broad knowledge of marketing.

The ultimate goal of the survey is to ask the right questions in the right manner to acquire the right results.

Bringing a new product to the market is a long process and requires a lot of research and analysis. In your journey to gather information or ideas for your business, Starlight Analytics can be an excellent guide. Starlight Analytics' product concept testing helps you measure your product's market demand and refine product features and benefits so you can launch with confidence. The process starts with custom research to design the survey according to your needs, execute the survey, and deliver the key insights on time.

  1. Survey research in the United States: Roots and emergence, 1890–1960. https://searchworks.stanford.edu/view/10733873
  2. How to create a survey questionnaire that gets great responses. https://luc.id/knowledgehub/how-to-create-a-survey-questionnaire-that-gets-great-responses/
  3. Internet/broadband fact sheet. https://www.pewresearch.org/internet/fact-sheet/internet-broadband/



Doing Survey Research | A Step-by-Step Guide & Examples

Published on 6 May 2022 by Shona McCombes. Revised on 10 October 2022.

Survey research means collecting information about a group of people by asking them questions and analysing the results. To conduct an effective survey, follow these six steps:

  • Determine who will participate in the survey
  • Decide the type of survey (mail, online, or in-person)
  • Design the survey questions and layout
  • Distribute the survey
  • Analyse the responses
  • Write up the results

Surveys are a flexible method of data collection that can be used in many different types of research.

Table of contents

  • What are surveys used for?
  • Step 1: Define the population and sample
  • Step 2: Decide on the type of survey
  • Step 3: Design the survey questions
  • Step 4: Distribute the survey and collect responses
  • Step 5: Analyse the survey results
  • Step 6: Write up the survey results
  • Frequently asked questions about surveys

Surveys are used as a method of gathering data in many different fields. They are a good choice when you want to find out about the characteristics, preferences, opinions, or beliefs of a group of people.

Common uses of survey research include:

  • Social research: Investigating the experiences and characteristics of different social groups
  • Market research: Finding out what customers think about products, services, and companies
  • Health research: Collecting data from patients about symptoms and treatments
  • Politics: Measuring public opinion about parties and policies
  • Psychology: Researching personality traits, preferences, and behaviours

Surveys can be used in both cross-sectional studies , where you collect data just once, and longitudinal studies , where you survey the same sample several times over an extended period.


Before you start conducting survey research, you should already have a clear research question that defines what you want to find out. Based on this question, you need to determine exactly who you will target to participate in the survey.

Populations

The target population is the specific group of people that you want to find out about. This group can be very broad or relatively narrow. For example:

  • The population of Brazil
  • University students in the UK
  • Second-generation immigrants in the Netherlands
  • Customers of a specific company aged 18 to 24
  • British transgender women over the age of 50

Your survey should aim to produce results that can be generalised to the whole population. That means you need to carefully define exactly who you want to draw conclusions about.

It’s rarely possible to survey the entire population of your research – it would be very difficult to get a response from every person in Brazil or every university student in the UK. Instead, you will usually survey a sample from the population.

The sample size depends on how big the population is. You can use an online sample calculator to work out how many responses you need.
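Online sample calculators typically implement Cochran's formula with a finite-population correction. A minimal Python sketch of that calculation (the defaults assume a 95% confidence level, a 5% margin of error, and the most conservative response proportion p = 0.5):

```python
import math

def sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Cochran's formula with finite-population correction.
    z=1.96 corresponds to ~95% confidence; p=0.5 maximizes variance,
    so it gives the most conservative (largest) sample size."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)        # correct for finite N
    return math.ceil(n)

print(sample_size(1_000_000))  # -> 385
print(sample_size(500))        # -> 218
```

Note how the required sample barely grows once the population is large: surveying a country needs roughly the same sample as surveying a big city, which is why the classic "about 400 respondents" figure appears so often.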

There are many sampling methods that allow you to generalise to broad populations. In general, though, the sample should aim to be representative of the population as a whole. The larger and more representative your sample, the more valid your conclusions.

There are two main types of survey:

  • A questionnaire , where a list of questions is distributed by post, online, or in person, and respondents fill it out themselves
  • An interview , where the researcher asks a set of questions by phone or in person and records the responses

Which type you choose depends on the sample size and location, as well as the focus of the research.

Questionnaires

Sending out a paper survey by post is a common method of gathering demographic information (for example, in a government census of the population).

  • You can easily access a large sample.
  • You have some control over who is included in the sample (e.g., residents of a specific region).
  • The response rate is often low.

Online surveys are a popular choice for students doing dissertation research , due to the low cost and flexibility of this method. There are many online tools available for constructing surveys, such as SurveyMonkey and Google Forms .

  • You can quickly access a large sample without constraints on time or location.
  • The data is easy to process and analyse.
  • The anonymity and accessibility of online surveys mean you have less control over who responds.

If your research focuses on a specific location, you can distribute a written questionnaire to be completed by respondents on the spot. For example, you could approach the customers of a shopping centre or ask all students to complete a questionnaire at the end of a class.

  • You can screen respondents to make sure only people in the target population are included in the sample.
  • You can collect time- and location-specific data (e.g., the opinions of a shop’s weekday customers).
  • The sample size will be smaller, so this method is less suitable for collecting data on broad populations.

Oral interviews are a useful method for smaller sample sizes. They allow you to gather more in-depth information on people’s opinions and preferences. You can conduct interviews by phone or in person.

  • You have personal contact with respondents, so you know exactly who will be included in the sample in advance.
  • You can clarify questions and ask for follow-up information when necessary.
  • The lack of anonymity may cause respondents to answer less honestly, and there is more risk of researcher bias.

Like questionnaires, interviews can be used to collect quantitative data : the researcher records each response as a category or rating and statistically analyses the results. But they are more commonly used to collect qualitative data : the interviewees’ full responses are transcribed and analysed individually to gain a richer understanding of their opinions and feelings.

Next, you need to decide which questions you will ask and how you will ask them. It’s important to consider:

  • The type of questions
  • The content of the questions
  • The phrasing of the questions
  • The ordering and layout of the survey

Open-ended vs closed-ended questions

There are two main forms of survey questions: open-ended and closed-ended. Many surveys use a combination of both.

Closed-ended questions give the respondent a predetermined set of answers to choose from. A closed-ended question can include:

  • A binary answer (e.g., yes/no or agree/disagree )
  • A scale (e.g., a Likert scale with five points ranging from strongly agree to strongly disagree )
  • A list of options with a single answer possible (e.g., age categories)
  • A list of options with multiple answers possible (e.g., leisure interests)
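As an illustration (hypothetical question ids and options, not any particular survey tool's API), the four closed-ended formats above can be modeled as simple data structures with an answer validator:

```python
# The four closed-ended formats: binary, Likert scale,
# single-choice list, and multiple-choice list.
QUESTIONS = [
    {"id": "binary", "options": ["yes", "no"], "multi": False},
    {"id": "likert", "options": ["strongly agree", "agree", "neutral",
                                 "disagree", "strongly disagree"], "multi": False},
    {"id": "age_band", "options": ["18-24", "25-34", "35-44", "45+"], "multi": False},
    {"id": "interests", "options": ["sports", "music", "travel", "reading"],
     "multi": True},
]

def is_valid(question, answer):
    """Single-answer questions take one option; multiple-answer
    questions take any non-empty subset of the options."""
    if question["multi"]:
        return bool(answer) and set(answer) <= set(question["options"])
    return answer in question["options"]

q = {item["id"]: item for item in QUESTIONS}
assert is_valid(q["binary"], "yes")
assert not is_valid(q["likert"], "maybe")
assert is_valid(q["interests"], ["music", "travel"])
```

Because every valid answer is one of a fixed set of options, responses map directly onto categories that can be counted and statistically analysed.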

Closed-ended questions are best for quantitative research . They provide you with numerical data that can be statistically analysed to find patterns, trends, and correlations .

Open-ended questions are best for qualitative research. This type of question has no predetermined answers to choose from. Instead, the respondent answers in their own words.

Open questions are most common in interviews, but you can also use them in questionnaires. They are often useful as follow-up questions to ask for more detailed explanations of responses to the closed questions.

The content of the survey questions

To ensure the validity and reliability of your results, you need to carefully consider each question in the survey. All questions should be narrowly focused with enough context for the respondent to answer accurately. Avoid questions that are not directly relevant to the survey’s purpose.

When constructing closed-ended questions, ensure that the options cover all possibilities. If you include a list of options that isn’t exhaustive, you can add an ‘other’ field.

Phrasing the survey questions

In terms of language, the survey questions should be as clear and precise as possible. Tailor the questions to your target population, keeping in mind their level of knowledge of the topic.

Use language that respondents will easily understand, and avoid words with vague or ambiguous meanings. Make sure your questions are phrased neutrally, with no bias towards one answer or another.

Ordering the survey questions

The questions should be arranged in a logical order. Start with easy, non-sensitive, closed-ended questions that will encourage the respondent to continue.

If the survey covers several different topics or themes, group together related questions. You can divide a questionnaire into sections to help respondents understand what is being asked in each part.

If a question refers back to or depends on the answer to a previous question, they should be placed directly next to one another.

Before you start, create a clear plan for where, when, how, and with whom you will conduct the survey. Determine in advance how many responses you require and how you will gain access to the sample.

When you are satisfied that you have created a strong research design suitable for answering your research questions, you can conduct the survey through your method of choice – by post, online, or in person.

There are many methods of analysing the results of your survey. First you have to process the data, usually with the help of a computer program to sort all the responses. You should also cleanse the data by removing incomplete or incorrectly completed responses.
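A minimal sketch of such cleansing, assuming responses arrive as Python dictionaries with hypothetical field names, dropping rows that are incomplete or incorrectly completed:

```python
# Required fields and the valid rating scale are assumptions for
# this illustration; adjust them to the actual questionnaire.
REQUIRED = ("respondent_id", "q1", "q2")

def cleanse(responses, rating_fields=("q1", "q2"), scale=range(1, 6)):
    clean = []
    for row in responses:
        if any(row.get(field) in (None, "") for field in REQUIRED):
            continue  # incomplete response
        if any(row[f] not in scale for f in rating_fields):
            continue  # incorrectly completed (rating off the 1-5 scale)
        clean.append(row)
    return clean

raw = [
    {"respondent_id": 1, "q1": 4, "q2": 5},
    {"respondent_id": 2, "q1": None, "q2": 3},  # incomplete
    {"respondent_id": 3, "q1": 9, "q2": 2},     # off-scale rating
]
assert [r["respondent_id"] for r in cleanse(raw)] == [1]
```

In practice this filtering is usually done in a statistics package or spreadsheet, but the rules are the same: define what counts as complete and in-range, then drop everything else.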

If you asked open-ended questions, you will have to code the responses by assigning labels to each response and organising them into categories or themes. You can also use more qualitative methods, such as thematic analysis , which is especially suitable for analysing interviews.
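Coding is usually done manually or with dedicated qualitative-analysis software. As a toy illustration of the idea (hypothetical theme labels and keywords), a crude keyword-based coder might look like this:

```python
# Map each theme label to keywords that signal it in free text.
THEMES = {
    "price": ["expensive", "cheap", "cost", "price"],
    "quality": ["broke", "durable", "quality", "sturdy"],
    "service": ["support", "staff", "helpful", "rude"],
}

def code_response(text):
    """Assign every matching theme label to a free-text answer;
    answers matching no theme are flagged for manual review."""
    lowered = text.lower()
    labels = [theme for theme, keywords in THEMES.items()
              if any(k in lowered for k in keywords)]
    return labels or ["uncoded"]

assert code_response("Too expensive, but the staff were helpful") == ["price", "service"]
assert code_response("It arrived late") == ["uncoded"]
```

Real coding schemes are developed iteratively from the responses themselves; the point here is only that each answer ends up with one or more category labels that can then be counted.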

Statistical analysis is usually conducted using programs like SPSS or Stata. The same set of survey data can be subject to many analyses.

Finally, when you have collected and analysed all the necessary data, you will write it up as part of your thesis, dissertation , or research paper .

In the methodology section, you describe exactly how you conducted the survey. You should explain the types of questions you used, the sampling method, when and where the survey took place, and the response rate. You can include the full questionnaire as an appendix and refer to it in the text if relevant.

Then introduce the analysis by describing how you prepared the data and the statistical methods you used to analyse it. In the results section, you summarise the key results from your analysis.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviours. It is made up of four or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey , you present participants with Likert-type questions or statements, and a continuum of items, usually with five or seven possible responses, to capture their degree of agreement.

Individual Likert-type questions are generally considered ordinal data , because the items have clear rank order, but don’t have an even distribution.

Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.
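Combining individual item responses into an overall scale score can be sketched as follows (hypothetical item scores; the reverse-coded item is a common design choice to catch inattentive straight-line answering):

```python
def scale_score(item_scores, reverse_items=(), points=5):
    """Sum item responses (each 1..points). Reverse-coded items are
    flipped first, so a high total always means stronger agreement
    with the measured trait."""
    total = 0
    for i, score in enumerate(item_scores):
        total += (points + 1 - score) if i in reverse_items else score
    return total

# Four items on a 5-point scale, item index 3 reverse-coded:
# 5 + 4 + 4 + (6 - 1) = 18
assert scale_score([5, 4, 4, 1], reverse_items={3}) == 18
```

Whether the resulting totals may then be analysed with interval-level statistics is exactly the ordinal-versus-interval judgement described above.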

The type of data determines what statistical tests you should use to analyse your data.

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analysing data from people using questionnaires.

Cite this Scribbr article


McCombes, S. (2022, October 10). Doing Survey Research | A Step-by-Step Guide & Examples. Scribbr. Retrieved 15 April 2024, from https://www.scribbr.co.uk/research-methods/surveys/



Understanding the 3 Main Types of Survey Research & Putting Them to Use

Surveys are a powerful primary source of market research. There are three main types of survey research; understanding them will not merely organize your survey studies, but help you shape them from the onset of your research campaign.

It is crucial to be proficient in these types of survey research, as surveys should never be used as lone tools. A survey is a vehicle for generating insights as part of a larger market research or other research campaign.

Understanding the three types of survey research will help you learn aspects within these forms that you were either not aware of or were not well-versed in.

This article explores the three main types of survey research and teaches you when to best implement each form of research. 

Putting the Types of Survey Research into Perspective 

With the presence of online surveys and other market research methods such as focus groups, survey research methods are ever-growing. Before you choose a method, it is critical to decide on the type of survey research you need to conduct.

The type of survey research points to the kind of study you are going to apply in your campaign and all of its implications. The survey research type essentially hosts the research methods, which house the actual surveys. As such, the research type is one of the highest levels of the process, so consider it a starting point in your research campaign.

Remember that while there are various research types, the three presented in this article are the main types used in survey research. Researchers can apply them to other research techniques (such as focus groups, interviews, etc.), but they are best suited to surveys.

Descriptive Research

The first main type of survey research is descriptive research. This type is centered on describing, as its name suggests, a topic of study. This can be a population, an occurrence or a phenomenon. 

Descriptive research is often the first type of research applied around a research issue, because it paints a picture of a topic, rather than investigating why it exists to begin with. 

The Key Aspects of Descriptive Research

The following are the key attributes of descriptive research.

  • Makes up the majority of online survey methods.
  • Concentrates on the what, when, where and how questions, rather than the why.
  • Lays out the particulars surrounding a research topic, but not its origin.
  • Handles quantitative studies.
  • Deemed conclusive due to its quantitative data.
  • Provides data that supports statistical inferences about a target population.
  • Preplanned and highly structured.
  • Aims to define an occurrence, attitude, or opinion of the studied population.
  • Measures the significance of the results and formulates trends.
  • Can be used in cross-sectional and longitudinal surveys.

Survey Examples of Descriptive Research 

There are various types of surveys to use for descriptive research. In fact, you can apply virtually all of them if they meet the above requirements. Here are the major ones:

  • Descriptive surveys: These gather data about different subjects, determining which conditions the subjects exhibit and to what extent. Ex: determining how qualified applicants to a job are via a survey that checks for those qualifications.
  • Descriptive-normative surveys: Much like descriptive surveys, but the results of the survey are compared with a norm. 
  • Descriptive analysis surveys: This survey describes a phenomenon via an analysis that divides the subject into 2 parts. Ex: analyzing employees with the same job role across geolocations. 
  • Correlative Survey: This determines whether the relationship between 2 variables is either positive or negative; sometimes it can be used to find neutrality. For example, if A and B have negative, positive or no correlation.
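The correlative survey described above boils down to computing a correlation coefficient between two variables. A minimal sketch in Python, using only the standard library and made-up ratings for two hypothetical variables A and B:

```python
import math
import statistics

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two variables."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Made-up survey ratings for variables A and B (illustrative only).
a = [1, 2, 3, 4, 5]
b = [2, 4, 5, 4, 5]

r = pearson_r(a, b)  # r > 0: positive; r < 0: negative; r near 0: none
```

A value of r near +1 or -1 indicates a strong positive or negative correlation, while a value near 0 suggests no linear relationship, matching the three outcomes the correlative survey is meant to distinguish.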

Exploratory Research 

Exploratory research is predicated on unearthing ideas and insights rather than amassing statistics. Also unlike descriptive research, exploratory research is not conclusive. This is because this research is conducted to obtain a better understanding of an existing phenomenon, one that has either not been studied thoroughly or is lacking some information.

Exploratory research is most apt to use at the beginning of a research campaign. In business, this kind of research is necessary for identifying issues within a company, opportunities for growth, adopting new procedures and deciding on which issues require statistical research, i.e., descriptive research. 

The Key Aspects of Exploratory Research

Exploratory research is also called interpretative research or the grounded theory approach. The following are its key attributes, including how it differs from descriptive research.

  • Uses exploratory questions, which are intended to probe subjects in a qualitative manner.
  • Provides quality information that can uncover other unknown issues or solutions.
  • Is not meant to provide data that is statistically measurable. 
  • Used to gain familiarity with an existing problem by understanding its specifics.
  • Starts with a general idea with the outcomes of the research being used to find related issues with the research subject.
  • Typically exists within open-ended questions.  
  • Its process varies based on the new insights researchers gain and how they choose to go about them.
  • Usually asks for the what, how and most distinctively, the why.
  • Due to the absence of past research on the subject, exploratory research is time-consuming.
  • Unstructured and flexible.

Examples of Exploratory Research

Since exploratory research is not structured and often scattered, it can exist within a multitude of survey types. For example, it can be used in an employee feedback survey, a cross-sectional survey and virtually any other that allows you to ask questions on the why and employs open-ended questions. 

Here are a few other ways to conduct exploratory research:

  • Case studies: They help researchers analyze existing cases that deal with a similar phenomenon. This method often involves secondary research , unless your business or organization has case studies on a similar topic. Perhaps one of your competitors offers one as well. With case studies, the researcher needs to study all the variables in the case study in relation to their own. 
  • Field Observations: This method is best suited for researchers who deal with their subjects in physical environments, for example, those studying customers in a store or patients in a clinic. It can also be applied by studying digital behaviors using a session replay tool. 
  • Focus Groups: This involves a group of people, typically 6-10, coming together and speaking with the researcher, as opposed to having a one-on-one conversation. Participants are chosen to provide insights on the topic of study and discuss it with other members of the focus group, while the researcher observes and acts as a moderator. 
  • Interviews : Interviews can be conducted in person or over the phone. Researchers have the option of interviewing their target market, their overall target population, or subject matter experts. The latter will provide significant and professional-grade insights, the kind that non-experts typically can’t offer. 

Causal Research

The final type of survey research is causal research, which, much like descriptive research, is structured, preplanned and draws quantitative insights. Also called explanatory research, causal research aims to discover whether there is any causality between the relationships of variables. 

As such, it focuses primarily on cause-and-effect relationships. In this regard, it stands in opposition to descriptive research, which is far broader. Causal research has only two objectives:

  • Understanding which variables are the cause and which are the effect
  • Deciphering the workings of the relationship between the causal variables, including how they produce the effect.

The Key Aspects of Causal Research

The following provides the key traits of causal research, including how it differs from descriptive and exploratory research. 

  • Considered conclusive research due to its structured design, preplanning and quantitative nature. 
  • Its two objectives make this research type more scientific than exploratory and descriptive research. 
  • Focuses on observing the variations in variables suspected as causing the changes in other variables.
  • Measures changes in both the suspected causal variables and the ones they affect.
  • Variables suspected of being causal (for example, an advertisement or a sales promotion) are isolated and tested to meet the aforesaid two objectives.
  • Requires setting objectives, preplanning parameters, and identifying potential causal variables and affected variables to reduce researcher bias. 
  • Requires accounting for all the possible causal factors that may be affecting the supposed affected variable, i.e., there can’t be any outside (non-accounted) variables.
  • All confounding variables that can affect the results have to be kept consistent and controlled to make sure no hidden variable is in any way influencing the relationship between two variables. 
  • To establish a cause-and-effect relationship, the cause must precede the effect.  

Examples of Causal Research

Causal research relies on the most scientific method of the three types of survey research. Given that it requires experimentation, a vast number of surveys can be conducted on the variables to determine whether they are causal, non-causal or affected.

Here are a few examples of causal research in use:

  • Product testing: Particularly useful if it’s a new product to test market demand and sales capacity. 
  • Advertising Improvements: Researchers can study buying behaviors to see if there is any causality between ads and how much people buy or if the advertised products reach higher sales. The outcomes of this research can help marketers tweak their ad campaigns, discard them altogether or even consider product updates.
  • Increase customer retention : This can be conducted in different manners, such as via in-store experimentations, via digital shopping or through different surveys. These experiments will help you understand what current customers prefer and what repels them. 
  • Community Needs: Local governments can conduct a community survey to discover opinions surrounding community issues. For example, researchers can test whether certain local laws, transportation availability and authorizations are well or poorly received and whether they correlate with certain happenings.

Deciding on Which of the Types of Research to Conduct

Market researchers and marketers often have several aspects of their discipline that would benefit from conducting these three types of survey research. What's most empowering about these types of survey research is that they are not limited to surveys alone.

Instead, they bolster the idea that surveys should not be used as lone tools. Rather, survey research powers an abundance of other market research methods and campaigns. As such, researchers should turn to surveys only after they've decided on high-level campaigns and their needs.

As such, consider the core of what you need to study. Can your survey be applied to a macro-application? For example, in the business sector, this can be marketing, branding, advertising, etc.

Next, does your study require a methodical approach? For example, does it need to focus on one period of time among one population? If so, you will need to conduct a cross-sectional survey. 

Or does it need to be conducted over a period of time? That will require implementing a longitudinal study. Once you figure out these components, you should move on to choosing the type of survey research you're going to conduct. However, you can also decide on this before you choose one of these methodical approaches. 

Whichever route you decide to take, you’ll need a strong online survey provider, as this does, after all, involve surveys. The correct online survey platform will set your research up for success.  

Frequently asked questions

Why is it important to understand the types of survey research?

The type of survey research informs the kind of study you’ll be conducting. It becomes the backbone of your campaign and all its implications. Basically, the types of survey research host their designated research methods, which house the surveys. Therefore, the types of survey research you decide on are at the highest level of the research process and act as your starting point.

What is exploratory research?

Exploratory research is the most preliminary form of research, establishing the foundation of a research process. It focuses on unearthing ideas and insights rather than gathering statistics. It's not a conclusive form of research; rather, it is conducted to bolster understanding of a specific phenomenon.

What is descriptive research?

Descriptive research focuses on describing a topic of study like a population, an occurrence or a phenomenon. It is performed early on in the overall research process, as it paints an overall picture of a topic, while extracting the key details that you wouldn’t find with exploratory research alone.

What is a cross-sectional survey?

A cross-sectional survey is a survey used to gather research about a particular population at a specific point in time. It is considered to be the snapshot of a studied population.

What is causal research?

Causal research is typically performed in the latter stages of the entire research process, following correlational or descriptive research. It is conducted to find the causality between variables. It involves more than merely observing, as it relies on experiments and the manipulation of variables.

How can you decide which types of survey research to conduct?

Take a look at the core of what you need to study. Are you trying to focus on one period of time among a population? Does your survey research need to be conducted over a period of time? Questions like these will lead you to the right research type.

QuestionPro

Types of Survey: What It Is with Examples

Technically, a survey is a method organizations, businesses, or institutions use to gather and compile information from a group of people, more often known as the sample, to gain knowledge. The information or opinion collected from the sample is more often a generalization of what a large population thinks.

Different types of surveys help provide important or critical information in the form of meaningful data, which is further used by businesses or organizations to make informed and sound decisions. The collected data offers good insights only when the administered questionnaire is carefully designed to promote response rates and includes both open-ended and closed-ended questions with answer options. There is much variety when it comes to surveys, and we can identify their types based on the frequency of their administration or the way of deployment.

Types of Survey

Now that we know what a survey is and why we need to survey people, let's explore its types. These can be classified in different ways, as mentioned earlier, depending upon the frequency of administration and how the distribution/deployment occurs. There are other types of surveys, like random sample surveys (to understand public opinion or attitude) and self-selected studies.

Types of a survey based on deployment methods:

1. Online surveys:

One of the most popular types is the online survey. With technology advancing many folds with each passing day, online surveys are becoming more popular. This survey consists of survey questions that can be easily deployed to respondents online via email, or respondents can access the survey directly if they have an internet connection. These surveys are easy to design and simple to deploy. Respondents are given ample time and space to answer, so researchers can expect unbiased responses. They are less expensive, and data can be collected and analyzed quickly.

2. Paper surveys:

As the name suggests, this survey uses the traditional paper and pencil approach. Many would believe that paper surveys are a thing of the past. However, they are quite handy when it comes to field research and data collection. These surveys can go where computers, laptops or other handheld devices cannot go.

There is a flip side to it too. This survey type is the most expensive method of data collection. It includes deploying a large number of human resources, along with time and money.

3. Telephonic Surveys:

Researchers conduct these over the telephone. Respondents answer questions posed by the researcher on the research topic. These surveys are time-consuming and sometimes non-conclusive. Their success depends on how many people answer the phone and are willing to invest their time answering questions over the telephone.

4. One-to-One interviews:

The one-to-one interview helps researchers gather information or data directly from a respondent. It’s a qualitative research method  and depends on the knowledge and experience of a researcher to frame and ask relevant questions one after the other to collect meaningful insights from the  interview . These interviews can last from 30 minutes up to a few hours.

Types of a survey based on the frequency of deployment

1. Cross-sectional studies:

These surveys are administered to a small sample from a larger population within a short time frame. This type offers researchers a quick summary of what respondents think at a given time. These surveys are short and easy to answer and can measure opinion in one particular situation.

Consider, hypothetically, that an organization conducts a study related to breast cancer in America and chooses a sample to obtain cross-sectional data. Suppose this data indicates that breast cancer is most prevalent in women of African-American origin. The information is from one point in time. Now, if the researcher wants to delve more in-depth into the research, he/she can deploy a longitudinal survey.

2. Longitudinal surveys:

Longitudinal surveys are those surveys that help researchers to make an observation and collect data over an extended period. There are three main types of longitudinal studies: trend surveys, panel surveys, and cohort surveys.

Researchers deploy trend surveys to understand the shift or transformation in the thought processes of respondents over time. They use these surveys to understand how people's inclinations change with time.

Another longitudinal survey type is  a panel survey . Researchers administer these surveys to the same set or group of people over the years. Panel surveys are expensive in nature, and researchers try to stick to their panel to gather unbiased opinions.

The third type of longitudinal survey is the cohort survey. In this type, categories of people who meet specific similar criteria and characteristics form the target audience. The group need not consist of the same people each time; however, the people forming the group should share certain similarities.

3. Retrospective survey:

A retrospective survey is a type of study in which respondents answer questions to report on events from the past. By deploying this kind of survey, researchers can gather data based on past experiences and beliefs of people. This way, unlike a longitudinal survey, they can save the cost and time required.

Random public opinion/attitude type of survey research:

When an agency needs reliable, projectable data about the attitudes and opinions of its citizens or a select group of its citizens, it is essential to conduct a valid, random sample survey. Telephone interview surveys are considerably more common than in-person interviews because they are far less expensive to administer and act as a standard tool for gathering information.

There is a margin of error based on the sample size (generally, a minimum population sample of 200 is the industry standard for reliable data about any population segment). Overall, random sample telephone interview surveys provide reasonably accurate information about the population.

While there is a statistical  margin of error (the sample of 200 provides an error range of +/- 7% with a 95% confidence), this type of survey is the most democratic and reliable process for learning about the opinions of an entire community.
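The figures above follow from the standard formula for the margin of error of a sample proportion. A quick sketch in Python, assuming the worst case p = 0.5 and a 95% confidence level (z ≈ 1.96):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error for a sample proportion at ~95% confidence
    (z = 1.96), using the worst-case proportion p = 0.5 by default."""
    return z * math.sqrt(p * (1 - p) / n)

# A sample of 200 yields roughly a +/- 7% margin of error,
# matching the industry-standard figure cited above.
print(round(margin_of_error(200) * 100, 1))  # 6.9
```

Note that the margin shrinks with the square root of the sample size, so quadrupling the sample only halves the error range.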

A random sample survey is inappropriate for educating people about an issue or assessing what people will do at some future point (i.e., “Will you vote for this bond issue?”). But the results provide a reasonably accurate portrait of people’s opinions in the present moment (i.e., a person’s feelings or attitudes about the issues relating to the need to approve a bond). Questions in the past and present tense provide a reasonable degree of accuracy about a person’s usage and habit patterns.

If you are trying to calculate the ideal margin of error for your research, you can use tools like our margin of error calculator .

Self-selected type of survey research – Newspapers, mail, Internet, written questionnaires:

When an agency has a political need to create a survey process that allows anyone interested to respond, it can use a self-selected process. A written survey can be distributed in public locations, such as City Hall or the library, mailed directly, emailed, or published in the city newsletter or the local newspaper.

When reporting data from a self-selected survey, it is essential to begin with the qualifying language, “Of those who chose to respond…” Most often, those who volunteer to respond to a self-selected survey have a strong opinion (frequently negative) about the issue in question.

A self-selected survey can be an excellent public relations tool and a good way to inform the public. But it’s crucial to be cautious in drawing any conclusions about what the public in general thinks based on the results of a survey whose respondents are volunteers.

Types of surveys with examples

A researcher must have a proper medium to conduct research and collect meaningful information to make informed decisions. Also, it is essential to have a platform to create and deploy these various types of market research surveys.

QuestionPro is a platform that helps not only to create but also to deploy different types of surveys. We have 350+ types of survey templates and survey examples, including:

  • Customer survey templates: Customers are crucial to success for any business or organization, and so are customer satisfaction surveys. It is essential for organizations or companies to understand their customers and what their needs and preferences are. Use the customer survey template to understand your customers better and work on their feedback to grow and flourish your business.
  • Market research & Marketing survey templates : Use marketing survey templates for market research to determine what consumers think about products or services. These are also helpful for a brand to assess whether products are reasonably priced, gather feedback from consumers, measure their level of awareness, and more.
  • Community survey templates : Community survey templates can be administered to members of associations or foundations to get feedback regarding the various activities conducted within the association. This helps organizations understand members’ experiences and collect feedback on which programs add value, on previously held events, and more.
  • Human Resource survey templates : The human resource survey template can be used by businesses and organizations for employee evaluation, employee satisfaction, employee engagement, and more. Organizations can send these out to employees, and their feedback can be collected and implemented.

  • Industrial survey templates : Expertly designed survey templates that are customized for the different industries help to collect in-depth feedback or information from consumers of various industries like event management, hotel industry , fast food industry, transportation, just to name a few. Through these survey templates, the industry player can understand what good they are already doing and what needs more attention from a consumer’s point of view.
  • Academic survey templates : Academic survey templates are one of the best ways to understand how students and their parents respond to the efforts taken by your education institution. An online questionnaire designed by industry experts helps assess parent/student feedback on course evaluations, curriculum planning, training sessions, etc.
  • Nonprofit survey templates : These Nonprofit survey templates are designed by domain experts to collect targeted information and feedback from various donors, volunteers, stakeholders, and any other participants of a nonprofit’s activities. The questionnaires address various important touchpoints and collect data from event attendees, collect donor survey feedback, or run an internal survey among volunteers.

One can choose from these existing survey format templates or create a survey of their own, all this just at the click of a button.


Form Publisher Blog

Understanding the 3 Main Types of Research Surveys

Surveys play a vital role in collecting essential information. If you understand the different types of research surveys, you’ll be able to collect more meaningful data.

In this blog post, we’ll walk you through the fundamentals of three main survey types: exploratory, descriptive, and causal. We’ll give you insights into their distinct purposes, methodologies, and the unique benefits they offer.

Let's dive into the world of research surveys!

What are the three types of surveying?

In order to create surveys, you’re going to need a form creator. Google Forms is a simple tool, but if you need help creating one, check out our guide on creating a Google Form!

Exploratory surveys

This type of survey aims to explore a topic or problem broadly in an introductory manner. It’s conducted when there’s limited or no existing knowledge of the subject at hand.

Exploratory research aims to generate insights, ideas, and hypotheses rather than quantifying data or making conclusive statements. For this reason, open-ended questions are best for this kind of research.

Some key characteristics of this kind of research are:

  • Flexible and Unstructured : It’s open-ended and flexible. It allows you to adapt your approach, methods, and questions as you gather data and gain insights.
  • Qualitative Data: It generates qualitative data, such as descriptions, opinions, and perspectives. The data gathered can’t be quantified, meaning it can’t be measured.
  • Small and Diverse Samples: These involve relatively small and diverse samples. The diversity of the sample pool can allow you to gather a range of perspectives and experiences regarding an area or topic.
  • Hypothesis Generation: It helps generate a hypothesis rather than testing that hypothesis.
  • In-depth and Detailed Analysis: It involves thorough analysis and exploration of a topic. The surveyor delves into the collected information, identifies recurring themes, and extracts meaningful insights to inform future research directions.

Benefits of exploratory surveys

With exploratory research, you can gain a deeper understanding of a relatively unexplored or poorly understood problem. You can generate new ideas and hypotheses while identifying areas that can be explored further.

Additionally, exploratory research allows you to gather diverse perspectives and identify the variables of a problem. This can help problem-solving by establishing relationships and patterns between variables.

Exploratory question examples

Exploratory surveys answer broad questions like “What factors influence consumers' decision-making when choosing a brand?” or “What are the key factors that drive brand loyalty among our existing customer base?”

Some open-ended questions that may be used to gain insights into these questions can be:

  • Can you describe the factors influencing your decision when choosing a product or service in [specific industry]?
  • Are there any particular emotions or feelings that play a role in your decision-making process?
  • What role does brand reputation or trust play in your decision-making process?

Descriptive surveys

Rather than addressing a topic broadly, descriptive surveys aim to describe a topic or area in more detail. The primary purpose of descriptive surveys is to describe a particular phenomenon or group comprehensively.

To do that, these surveys typically use structured questionnaires with closed-ended questions to collect quantitative and conclusive data. The data collected is then analyzed and summarized using statistical measures such as frequencies, percentages, averages, or correlations.

Descriptive surveys are widely used in various fields, such as market research, social sciences, healthcare, and more. Some characteristics of descriptive surveys are:

  • Objectives: Here, the aim is to describe and capture information about problems, behaviors, opinions, or attitudes of a sample.
  • Quantitative Data: With the help of structured questionnaires based on closed-ended questions, you collect quantitative data that can be measured.
  • Large Sample Sizes: You need a larger sample size to ensure that the data collected is representative of the population.
  • Statistical Analysis: The collected data is analyzed using statistical techniques to summarize the key findings.
  • Representative Samples: You have to ensure the survey sample represents the target population as closely as possible so that your results are reliable and generally applicable.
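To make the "Statistical Analysis" point concrete, here is a minimal sketch in Python (the question and responses are hypothetical) of how closed-ended answers are tallied into frequencies and percentages:

```python
from collections import Counter

# Hypothetical responses to the closed-ended question
# "What is your age range?" from a descriptive survey.
responses = [
    "18-24", "25-34", "25-34", "35-44", "18-24",
    "25-34", "45-54", "55 and above", "25-34", "35-44",
]

counts = Counter(responses)
total = len(responses)

# Summarize each answer option as a frequency and a percentage.
for option, freq in counts.most_common():
    print(f"{option}: {freq} responses ({freq / total:.0%})")
```

The same tallying logic works for any closed-ended item; averages and correlations are computed the same way over coded response values.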

Benefits of descriptive surveys

Unlike exploratory surveys, descriptive surveys ask closed-ended questions. This lets you gather information quickly and efficiently, giving you a clear picture of what's happening. As you use structured questionnaires with set answer options, it becomes easier to analyze and summarize the data as well. If you use Advanced Summary for Google Forms, this is even easier!

These surveys help you reach conclusions that can inform decision-making. Since the sample size is large and the data are objective, you can have confidence in the reliability of the results. Further, you can also be sure that the results are representative of the target population.

Descriptive question examples

This is the most widely found type of survey online. Customer satisfaction surveys, employee engagement surveys, demographic surveys, market research surveys, and event feedback surveys are all descriptive surveys that can have closed-ended questions on them like:

  • Did our customer service team adequately resolve your issue? (Yes/No)
  • Do you feel that your superiors value your opinions and ideas? (Yes/No)
  • What is your age range? (18-24 / 25-34 / 35-44 / 45-54 / 55 and above)
  • Overall, how satisfied were you with the event? (Very dissatisfied - Very satisfied)

Since most questions on a descriptive survey are closed-ended, it is crucial that responses are accurate and error-free. Sending respondents a copy of their responses is a great way to verify them, and it also lets respondents correct any errors they spot.

If you’re using Google Forms, you know that sending response summaries through Google Forms is possible. However, another way to take your response summaries to another level is to try Form Publisher with Google Forms.

Causal surveys

Causal surveys are research studies exploring cause-and-effect relationships between variables. They aim to determine whether changes in one variable directly influence another variable.

In a descriptive survey, factors aren’t manipulated, only recorded for description. But in a causal survey, specific factors are manipulated to see their effect on the overall outcome.

Some characteristics include:

  • Experimental Design: Causal surveys often involve experimental designs, where participants are assigned to different groups.
  • Manipulation of Variables: Independent variables are intentionally manipulated to observe their impact on a dependent variable.
  • Control Group: These surveys typically include a control group that doesn’t receive the manipulated variable. It serves as a baseline for comparison.
  • Randomization: To ensure unbiased results, participants in causal surveys are often randomly assigned to different groups.
  • Quantitative Data Analysis: Causal surveys gather quantitative data, which is then statistically analyzed.
  • Replication: To establish reliable findings, causal surveys sometimes have to be replicated with different samples or settings.

Benefits of causal surveys

These surveys enable researchers to draw conclusions about cause and effect, providing valuable insights into the influencing factors in various situations.

Causal question examples

Causal research can help establish cause and effect in such questions as:

  • What is the effect of elevation on VO2 max in an individual?
  • What is the effect of changing packaging on items sold?
  • What is the effect of using children of [age] in advertising for [product]?

Organize your survey results better with Form Publisher!

There you have it! The three main types of research surveys and what they can help you achieve. If you’re conducting an online survey, you’re undoubtedly familiar with Google Forms, the most intuitive and efficient platform for creating forms and surveys. To further enhance your Google Forms experience, consider using Form Publisher to organize your survey process. Form Publisher is a Google Forms add-on that can create individual documents out of responses for you and also send personalized response summaries. Explore Form Publisher today!


Open Access | Peer-reviewed | Research Article

Mapping the global geography of cybercrime with the World Cybercrime Index

  • Miranda Bruce. Roles: Data curation, Formal analysis, Investigation, Methodology, Visualization, Writing – original draft. Affiliations: Department of Sociology, University of Oxford, Oxford, United Kingdom; Canberra School of Professional Studies, University of New South Wales, Canberra, Australia. E-mail: [email protected]
  • Jonathan Lusthaus. Roles: Conceptualization, Investigation, Methodology, Writing – original draft. Affiliations: Department of Sociology, University of Oxford, Oxford, United Kingdom; Oxford School of Global and Area Studies, University of Oxford, Oxford, United Kingdom
  • Ridhi Kashyap. Roles: Formal analysis, Methodology, Writing – review & editing. Affiliations: Department of Sociology, University of Oxford, Oxford, United Kingdom; Leverhulme Centre for Demographic Science, University of Oxford, Oxford, United Kingdom
  • Nigel Phair. Roles: Funding acquisition, Methodology, Writing – review & editing. Affiliation: Department of Software Systems and Cybersecurity, Faculty of IT, Monash University, Victoria, Australia
  • Federico Varese. Roles: Conceptualization, Funding acquisition, Methodology, Writing – review & editing. Affiliation: Centre d’études européennes et de politique comparée, Sciences Po, Paris, France

PLOS

  • Published: April 10, 2024
  • https://doi.org/10.1371/journal.pone.0297312


Cybercrime is a major challenge facing the world, with estimated costs ranging from the hundreds of millions to the trillions. Despite the threat it poses, cybercrime is a somewhat invisible phenomenon. In carrying out their virtual attacks, offenders often mask their physical locations by hiding behind online nicknames and technical protections. This means technical data are not well suited to establishing the true location of offenders, and scholarly knowledge of cybercrime geography is limited. This paper proposes a solution: an expert survey. From March to October 2021 we invited leading experts in cybercrime intelligence/investigations from across the world to participate in an anonymized online survey on the geographical location of cybercrime offenders. The survey asked participants to consider five major categories of cybercrime, nominate the countries that they consider to be the most significant sources of each of these types of cybercrimes, and then rank each nominated country according to the impact, professionalism, and technical skill of its offenders. The outcome of the survey is the World Cybercrime Index, a global metric of cybercriminality organised around five types of cybercrime. The results indicate that a relatively small number of countries house the greatest cybercriminal threats. These findings partially remove the veil of anonymity around cybercriminal offenders, may aid law enforcement and policymakers in fighting this threat, and contribute to the study of cybercrime as a local phenomenon.

Citation: Bruce M, Lusthaus J, Kashyap R, Phair N, Varese F (2024) Mapping the global geography of cybercrime with the World Cybercrime Index. PLoS ONE 19(4): e0297312. https://doi.org/10.1371/journal.pone.0297312

Editor: Naeem Jan, Korea National University of Transportation, REPUBLIC OF KOREA

Received: October 11, 2023; Accepted: January 3, 2024; Published: April 10, 2024

Copyright: © 2024 Bruce et al. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: The dataset and relevant documents have been uploaded to the Open Science Framework. Data can be accessed via the following URL: https://osf.io/5s72x/?view_only=ea7ee238f3084054a6433fbab43dc9fb .

Funding: This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (Grant agreement No. 101020598 – CRIMGOV, Federico Varese PI). FV received the award and is the Primary Investigator. The ERC did not play any role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. Funder website: https://erc.europa.eu/faq-programme/h2020 .

Competing interests: The authors have declared that no competing interests exist.

Introduction

Although the geography of cybercrime attacks has been documented, the geography of cybercrime offenders–and the corresponding level of “cybercriminality” present within each country–is largely unknown. A number of scholars have noted that valid and reliable data on offender geography are sparse [ 1 – 4 ], and there are several significant obstacles to establishing a robust metric of cybercriminality by country. First, there are the general challenges associated with the study of any hidden population, for whom no sampling frame exists [ 5 , 6 ]. If cybercriminals themselves cannot be easily accessed or reliably surveyed, then cybercriminality must be measured through a proxy. This is the second major obstacle: deciding what kind of proxy data would produce the most valid measure of cybercriminality. While there is much technical data on cybercrime attacks, this data captures artefacts of the digital infrastructure or proxy (obfuscation) services used by cybercriminals, rather than their true physical location. Non-technical data, such as legal cases, can provide geographical attribution for a small number of cases, but the data are not representative of global cybercrime. In short, the question of how best to measure the geography of cybercriminal offenders is complex and unresolved.

There is tremendous value in developing a metric for cybercrime. Cybercrime is a major challenge facing the world, with the most sober cost estimates in the hundreds of millions [ 7 , 8 ], but with high-end estimates in the trillions [ 9 ]. By accurately identifying which countries are cybercrime hotspots, the public and private sectors could concentrate their resources on these hotspots and spend less time and funds on cybercrime countermeasures in countries where the problem is limited. Whichever strategies are deployed in the fight against cybercrime (see for example [ 10 – 12 ]), they should be targeted at countries that produce the largest cybercriminal threat [ 3 ]. A measure of cybercriminality would also enable other lines of scholarly inquiry. For instance, an index of cybercriminality by country would allow for a genuine dependent variable to be deployed in studies attempting to assess which national characteristics–such as educational attainment, Internet penetration, or GDP–are associated with cybercrime [ 4 , 13 ]. These associations could also be used to identify future cybercrime hubs so that early interventions could be made in at-risk countries before a serious cybercrime problem develops. Finally, this metric would speak directly to theoretical debates on the locality of cybercrime, and organized crime more generally [ 11 – 14 ]. The challenge we have accepted is to develop a metric that is both global and robust. The following sections respectively outline the background elements of this study, the methods, the results, and then discussion and limitations.

Profit-driven cybercrime, which is the focus of this paper, has been studied by both social scientists and computer scientists. It has been characterised by empirical contributions that have sought to illuminate the nature and organisation of cybercrime both online and offline [ 15 – 20 ]. But, as noted above, the geography of cybercrime has only been addressed by a handful of scholars, and they have identified a number of challenges connected to existing data. In a review of existing work in this area, Lusthaus et al. [ 2 ] identify two flaws in existing cybercrime metrics: 1) their ability to correctly attribute the location of cybercrime offenders; 2) beyond a handful of examples, their ability to compare the severity and scale of cybercrime between countries.

Building attribution into a cybercrime index is challenging. Often using technical data, cybersecurity firms, law enforcement agencies and international organisations regularly publish reports that identify the major sources of cyber attacks (see for example [ 21 – 24 ]). Some of these sources have been aggregated by scholars (see [ 20 , 25 – 29 ]). But the kind of technical data contained in these reports cannot accurately measure offender location. Kigerl [ 1 ] provides some illustrative remarks:

Where the cybercriminals live is not necessarily where the cyberattacks are coming from. An offender from Romania can control zombies in a botnet, mostly located in the United States, from which to send spam to countries all over the world, with links contained in them to phishing sites located in China. The cybercriminal’s reach is not limited by national borders (p. 473).

As cybercriminals often employ proxy services to hide their IP addresses, carry out attacks across national boundaries, collaborate with partners around the world, and can draw on infrastructure based in different countries, superficial measures do not capture the true geographical distribution of these offenders. Lusthaus et al. [ 2 ] conclude that attempts to produce an index of cybercrime by country using technical data suffer from a problem of validity. “If they are a measure of anything”, they argue, “they are a measure of cyber-attack geography”, not of the geography of offenders themselves (p. 452).

Non-technical data are far better suited to incorporating attribution. Court records, indictments and other investigatory materials speak more directly to the identification of offenders and provide more granular detail on their location. But while this type of data is well matched to micro-level analysis and case studies, there are fundamental questions about the representativeness of these small samples, even if collated. First, any sample would capture cases only where cybercriminals had been prosecuted, and would not include offenders that remain at large. Second, if the aim was to count the number of cybercrime prosecutions by country, this may reflect the seriousness with which various countries take cybercrime law enforcement or the resources they have to pursue it, rather than the actual level of cybercrime within each country (for a discussion see [ 30 , 31 ]). Given such concerns, legal data is also not an appropriate approach for such a research program.

Furthermore, to carry out serious study on this topic, a cybercrime metric should aim to include as many countries as possible, and the sample must allow for variation so that high and low cybercrime countries can be compared. If only a handful of widely known cybercrime hubs are studied, this will result in selection on the dependent variable. The obvious challenge in providing such a comparative scale is the lack of good quality data to devise it. As an illustration, in their literature review Hall et al. [ 10 ] identify the “dearth of robust data” on the geographical location of cybercriminals, which means they are only able to include six countries in their final analysis (p. 285. See also [ 4 , 32 , 33 ]).

Considering the weaknesses within both existing technical and legal data discussed above, Lusthaus et al. [ 2 ] argue for the use of an expert survey to establish a global metric of cybercriminality. Expert survey data “can be extrapolated and operationalised”, and “attribution can remain a key part of the survey, as long as the participants in the sample have an extensive knowledge of cybercriminals and their operations” (p. 453). Up to this point, no such study has been produced. Such a survey would need to be very carefully designed for the resulting data to be both reliable and valid. One criticism of past cybercrime research is that surveys were used whenever other data was not immediately available, and that they were not always designed with care (for a discussion see [ 34 ]).

In response to the preceding considerations, we designed an expert survey in 2020, refined it through focus groups, and deployed it throughout 2021. The survey asked participants to consider five major types of cybercrime– Technical products/services ; Attacks and extortion ; Data/identity theft ; Scams ; and Cashing out/money laundering –and nominate the countries that they consider to be the most significant sources of each of these cybercrime types. Participants then rated each nominated country according to the impact of the offenses produced there, and the professionalism and technical skill of the offenders based there. Using the expert responses, we generated scores for each type of cybercrime, which we then combined into an overall metric of cybercriminality by country: the World Cybercrime Index (WCI). The WCI achieves our initial goal to devise a valid measure of cybercrime hub location and significance, and is the first step in our broader aim to understand the local dimensions of cybercrime production across the world.

Participants

Identifying and recruiting cybercrime experts is challenging. Much like the hidden population of cybercriminals we were trying to study, cybercrime experts themselves are also something of a hidden population. Due to the nature of their work, professionals working in the field of cybercrime tend to be particularly wary of unsolicited communication. There is also the problem of determining who is a true cybercrime expert, and who is simply presenting themselves as one. We designed a multi-layered sampling method to address such challenges.

The heart of our strategy involved purposive sampling. For an index based entirely on expert opinion, ensuring the quality of these experts (and thereby the quality of our survey results) was of the utmost importance. We defined “expertise” as adult professionals who have been engaged in cybercrime intelligence, investigation, and/or attribution for a minimum of five years and had a reputation for excellence amongst their peers. Only currently- or recently-practicing intelligence officers and investigators were included in the participant pool. While participants could be from either the public or private sectors, we explicitly excluded professionals working in the field of cybercrime research who are not actively involved in tracking offenders, which includes writers and academics. In short, only experts with first-hand knowledge of cybercriminals are included in our sample. To ensure we had the leading experts from a wide range of backgrounds and geographical areas, we adopted two approaches for recruitment. We searched extensively through a range of online sources including social media (e.g. LinkedIn), corporate sites, news articles and cybercrime conference programs to identify individuals who met our inclusion criteria. We then faced a second challenge of having to find or discern contact information for these individuals.

Complementing this strategy, the authors also used their existing relationships with recognised cybercrime experts to recruit participants using the “snowball” method [ 35 ]. This both enhanced access and provided a mechanism for those we knew were bona fide experts to recommend other bona fide experts. The majority of our participants were recruited in this manner, either directly through our initial contacts or through a series of referrals that followed. But it is important to note that this snowball sampling fell under our broader purposive sampling strategy. That is, all the original “seeds” had to meet our inclusion criteria of being a top expert in the first instance. Any connections we were offered also had to meet our criteria or we would not invite them to participate. Another important aspect of this sampling strategy is that we did not rely on only one gatekeeper, but numerous, often unrelated, individuals who helped us with introductions. This approach reduced bias in the sample. It was particularly important to deploy a number of different “snowballs” to ensure that we included experts from each region of the world (Africa, Asia Pacific, Europe, North America and South America) and from a range of relevant professional backgrounds. We limited our sampling strategy to English speakers. The survey itself was likewise written in English. The use of English was partly driven by the resources available for this study, but the population of cybercrime experts is itself very global, with many attending international conferences and cooperating with colleagues from across the world. English is widely spoken within this community. While we expect the gains to be limited, future surveys will be translated into some additional languages (e.g. Spanish and Chinese) to accommodate any non-English speaking experts that we may not otherwise be able to reach.

Our survey design, detailed below, received ethics approval from the Human Research Advisory Panel (HREAP A) at the University of New South Wales in Australia, approval number HC200488, and the Research Ethics Committee of the Department of Sociology (DREC) at the University of Oxford in the United Kingdom, approval number SOC_R2_001_C1A_20_23. Participants were recruited in waves between 1 August 2020 and 30 September 2021. All participants provided consent to participate in the focus groups, pilot survey, and final survey.

Survey design

The survey comprised three stages. First, we conducted three focus groups with seven experts in cybercrime intelligence/investigations to evaluate our initial assumptions, concepts, and framework. These experts were recruited because they had reputations as some of the very top experts in the field; they represented a range of backgrounds in terms of their own geographical locations and expertise across different types of cybercrime; and they spanned both the public and private sectors. In short, they offered a cross-section of the survey sample we aimed to recruit. These focus groups informed several refinements to the survey design and to specific terms, making them more comprehensible to participants. Some of the key terms, such as “professionalism” and “impact”, were a direct result of this process. Second, some participants from the focus groups then completed a pilot version of the survey, alongside others who had not taken part in these focus groups and could offer a fresh perspective. This allowed us to test technical components, survey questions, and user experience. The pilot participants provided useful feedback and prompted a further refinement of our approach. The final survey was released online in March 2021 and closed in October 2021. We implemented several elements to ensure data quality, including a series of preceding statements about time expectations, attention checks, and visual cues throughout the survey. These elements significantly increased the likelihood that our participants were both suitable and would provide full and thoughtful responses.

The introduction to the survey outlined the survey’s two main purposes: to identify which countries are the most significant sources of profit-driven cybercrime, and to determine how impactful the cybercrime is in these locations. Participants were reminded that state-based actors and offenders driven primarily by personal interests (for instance, cyberbullying or harassment) should be excluded from their consideration. We defined the “source” of cybercrime as the country where offenders are primarily based, rather than their nationality. To maintain a level of consistency, we made the decision to only include countries formally recognised by the United Nations. We initially developed seven categories of cybercrime to be included in the survey, based on existing research. But during the focus groups and pilot survey, our experts converged on five categories as the most significant cybercrime threats on a global scale:

  • Technical products/services (e.g. malware coding, botnet access, access to compromised systems, tool production).
  • Attacks and extortion (e.g. DDoS attacks, ransomware).
  • Data/identity theft (e.g. hacking, phishing, account compromises, credit card compromises).
  • Scams (e.g. advance fee fraud, business email compromise, online auction fraud).
  • Cashing out/money laundering (e.g. credit card fraud, money mules, illicit virtual currency platforms).

After being prompted with these descriptions and a series of images of world maps to ensure participants considered a wide range of regions/countries, participants were asked to nominate up to five countries that they believed were the most significant sources of each of these types of cybercrime. Countries could be listed in any order; participants were not instructed to rank them. Nominating countries was optional and participants were free to skip entire categories if they wished. Participants were then asked to rate each of the countries they nominated against three measures: how impactful the cybercrime is, how professional the cybercrime offenders are, and how technically skilled the cybercrime offenders are. Across each of these three measures, participants were asked to assign scores on a Likert-type scale from 1 (e.g. least professional) to 10 (e.g. most professional). Nominating and then rating countries was repeated for all five cybercrime categories.
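As an illustration of the data each participant generates (the field names below are our own, not the authors' actual schema), one participant's answers for a single category might be represented as:

```python
# Illustrative record: one participant, one cybercrime category.
# Up to five nominated countries, each rated 1-10 on three measures.
response = {
    "category": "Attacks and extortion",
    "nominations": [
        {"country": "Country A", "impact": 9, "professionalism": 8, "technical_skill": 9},
        {"country": "Country B", "impact": 7, "professionalism": 6, "technical_skill": 7},
    ],
}

# A participant who nominates the maximum of five countries in all five
# categories completes 5 * 5 * 3 = 75 Likert-type ratings in total.
max_ratings = 5 * 5 * 3
```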

This process, of nominating and then rating countries across each category, introduces a potential limitation in the survey design: the possibility of survey response fatigue. If a participant nominated the maximum number of countries in each cybercrime category–25 countries in total–by the end of the survey they would have completed 75 Likert-type scales. The repetition of this task, paired with the consideration it requires, has the potential to introduce respondent fatigue as the survey progresses, in the form of response attrition, an increase in careless responses, and/or an increased likelihood of significantly higher or lower scores. This is a common phenomenon in long-form surveys [ 36 ], and especially online surveys [ 37 , 38 ]. Jeong et al [ 39 ], for instance, found that questions asked near the end of a 2.5-hour survey were 10–64% more likely to be skipped than those at the beginning. We designed the survey carefully, refined with the aid of focus groups and a pilot, to ensure that only the most essential questions were asked. As such, the survey was not overly long (estimated to take 30 minutes). To accommodate any cognitive load, participants were allowed to complete the survey anytime within a two-week window. Their progress was saved after each session, which enabled participants to take breaks between completing each section (a suggestion made by Jeong et al [ 39 ]). Crucially, throughout survey recruitment, participants were informed that the survey is time-intensive and requires significant attention. At the beginning of the survey, participants were instructed not to undertake it unless they could allocate 30 minutes to it. This approach pre-empted survey fatigue by discouraging those likely to lose interest from participating, and it was reinforced by the fact that only experts with a specific and strong interest in the subject matter were invited in the first place.
Survey fatigue is addressed further in the Discussion section, where we provide an analysis suggesting little evidence of participant fatigue.

In sum, we designed the survey to protect against various sources of bias and error, and there are encouraging signs that the effects of these issues in the data are limited (see Discussion ). Yet expert surveys are inherently prone to some types of bias and response issues; in the WCI, the issue of selection and self-selection within our pool of experts, as well as geo-political biases that may lead to systematic over- or under-scoring of certain countries, is something we considered closely. We discuss these issues in detail in the subsection on Limitations below.

[Eq (1)]

This “type” score is then multiplied by the proportion of experts who nominated that country. Within each cybercrime type, a country could be nominated a possible total of 92 times–once per participant. We then multiply this weighted score by ten to produce a continuous scale out of 100 (see Eq (2) ). This process prevents countries that received high scores, but a low number of nominations, from receiving artificially high rankings.

[Eq (2)]
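The weighting just described can be sketched as follows (the published analyses were performed in R and the authoritative formula is Eq (2); this Python sketch, with made-up numbers, only illustrates the weighting logic):

```python
N_EXPERTS = 92  # each country could be nominated at most once per participant

def wci_type_score(mean_type_score, n_nominations, n_experts=N_EXPERTS):
    """Weight a country's mean 'type' score (out of 10) by the proportion of
    experts who nominated it, then scale the result to a 0-100 range."""
    weighted = mean_type_score * (n_nominations / n_experts)
    return weighted * 10

# A country rated highly by only a few experts ends up well below a country
# rated somewhat lower but nominated far more often.
few_nominations = wci_type_score(8.0, 10)    # high mean score, 10 nominations
many_nominations = wci_type_score(6.0, 60)   # lower mean score, 60 nominations
```

This is the mechanism that prevents a country with a high mean score but few nominations from receiving an artificially high ranking.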

The analyses for this paper were performed in R. All data and code have been made publicly available so that our analysis can be reproduced and extended.

We contacted 245 individuals to participate in the survey, of which 147 agreed and were sent invitation links to participate. Out of these 147, a total of 92 people completed the survey, giving us an overall response rate of 37.5%. Given the expert nature of the sample, this is a high response rate (for a detailed discussion see [ 40 ]), and just below the 44% that Wu, Zhao, and Fils-Aime estimate for general online surveys in social science [ 41 ]. The survey collected information on the participants’ primary nationality and their current country of residence. Four participants chose not to identify their nationality. Overall, participants represented all five major geopolitical regions (Africa, the Asia-Pacific, Europe, North America and South America), both in nationality and residence, though the distribution was uneven and concentrated in particular regions/countries. There were 8 participants from Africa, 11 participants from the Asia Pacific, 27 from North America, and 39 from Europe. South America was the least represented region with only 3 participants. A full breakdown of participants’ nationality, residence, and areas of expertise is included in the Supporting Information document (see S1 Appendix ).

Table 1 shows the scores for the top fifteen countries of the WCI overall index. Each entry shows the country, along with the mean score (out of 10) averaged across the participants who nominated this country, for three categories: impact, professionalism, and technical skill. This is followed by each country’s WCI overall and WCI type scores. Countries are ordered by their WCI overall score. Each country’s highest WCI type scores are highlighted. Full indices that include all 197 UN-recognised countries can be found in S1 Indices .

[Table 1. https://doi.org/10.1371/journal.pone.0297312.t001]

Some initial patterns can be observed from this table, as well as the full indices in the supplementary document (see S1 Indices ). First, a small number of countries hold consistently high ranks for cybercrime. Six countries–China, Russia, Ukraine, the US, Romania, and Nigeria–appear in the top 10 of every WCI type index, including the WCI overall index. Aside from Romania, all appear in the top three at least once. While appearing in a different order, the first ten countries in the Technical products/services and Attacks and extortion indices are the same. Second, despite this small list of countries regularly appearing as cybercrime hubs, the survey results capture a broad geographical diversity. All five geopolitical regions are represented across each type. Overall, 97 distinct countries were nominated by at least one expert. This can be broken down into the cybercrime categories. Technical products/services includes 41 different countries; Attacks and extortion 43; Data/identity theft 51; Scams 49; and Cashing out/money laundering 63.

Some key findings emerge from these results, which are further illustrated by Figs 1 and 2 . First, cybercrime is not universally distributed. Certain countries are cybercrime hubs, while many others are not associated with cybercriminality in any serious way. Second, countries that are cybercrime hubs specialise in particular types of cybercrime. That is, despite a small number of countries being leading producers of cybercrime, there is meaningful variation between them, both across categories and in their scores for impact, professionalism and technical skill. Third, the results show a longer list of cybercrime-producing countries than is usually included in publications on the geography of cybercrime. As the survey captures leading producers of cybercrime, rather than any country where cybercrime is present, this suggests that, even if a small number of countries are of serious concern and close to 100 are of little concern, the remaining half of the world’s countries are of at least moderate concern.

[Fig 1. Base map and data from OpenStreetMap and OpenStreetMap Foundation. https://doi.org/10.1371/journal.pone.0297312.g001]

[Fig 2. https://doi.org/10.1371/journal.pone.0297312.g002]

To examine further the second finding concerning hub specialisation, we calculated an overall “Technicality score”–or “T-score”–for the top 15 countries of the WCI overall index. We assigned a value from 2 to -2 to each type of cybercrime to designate the level of technical complexity involved. Technical products/services is the most technically complex type (2), followed by Attacks and extortion (1), Data/identity theft (0), Scams (-1), and finally Cashing out and money laundering (-2), which has very low technical complexity. We then multiplied each country’s WCI score for each cybercrime type by its assigned value–for instance, a Scams WCI score of 5 would be multiplied by -1, with a final modified score of -5. As a final step, for each country, we added all of their modified WCI scores across all five categories together to generate the T-score. Fig 3 plots the top 15 WCI overall countries’ T-scores, ordering them by score. Countries with negative T-scores are highlighted in red, and countries with positive scores are in black.
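The T-score procedure described above can be sketched directly. The category weights are taken from the text; the example country's WCI type scores below are invented for illustration, not the paper's data.

```python
# Hedged sketch of the T-score calculation described above.

# Technical-complexity weight for each cybercrime type (from the text).
WEIGHTS = {
    "technical_products": 2,
    "attacks_extortion": 1,
    "data_identity_theft": 0,
    "scams": -1,
    "cashing_out": -2,
}

def t_score(wci_type_scores):
    """Sum of each WCI type score multiplied by its complexity weight."""
    return sum(WEIGHTS[t] * s for t, s in wci_type_scores.items())

# Invented type scores for a hypothetical country.
example = {
    "technical_products": 6.0,
    "attacks_extortion": 5.0,
    "data_identity_theft": 4.0,
    "scams": 3.0,
    "cashing_out": 2.0,
}
print(t_score(example))  # 2*6 + 1*5 + 0*4 - 1*3 - 2*2 = 10.0
```

Note how the Data/identity theft score drops out (weight 0), and how a country strong in both high-weight and low-weight types would see its positive and negative terms cancel toward zero, matching the text's point about balanced hubs.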

[Fig 3. Negative values correspond to lower technicality, positive values to higher technicality. https://doi.org/10.1371/journal.pone.0297312.g003]

The T-score is best suited to characterising a given hub’s specialisation. For instance, as the line graph makes clear, Russia and Ukraine are highly technical cybercrime hubs, whereas Nigerian cybercriminals are engaged in less technical forms of cybercrime. But for countries that lie close to the centre (0), the story is more complex. Some may specialise in cybercrime types with middling technical complexity (e.g. Data/identity theft ). Others may specialise in both high- and low-tech crimes. In this sample of countries, India (-6.02) somewhat specialises in Scams but is otherwise a balanced hub, whereas Romania (10.41) and the USA (-2.62) specialise in both technical and non-technical crimes, balancing their scores towards zero. In short, each country has a distinct profile, indicating a unique local dimension.

This paper introduces a global and robust metric of cybercriminality–the World Cybercrime Index. The WCI moves past previous technical measures of cyber attack geography to establish a more focused measure of the geography of cybercrime offenders. Elicited through an expert survey, the WCI shows that cybercrime is not universally distributed. The key theoretical contribution of this index is to illustrate that cybercrime, often seen as a fluid and global type of organized crime, actually has a strong local dimension (in keeping with broader arguments by some scholars, such as [ 14 , 42 ]).

While we took a number of steps to ensure our sample of experts was geographically representative, the sample is skewed towards some regions (such as Europe) and some countries (such as the US). This may simply reflect the high concentration of leading cybercrime experts in these locations. But it is also possible this distribution reflects other factors, including the authors’ own social networks; the concentration of cybercrime taskforces and organisations in particular countries; the visibility of different nations on networking platforms like LinkedIn; and also perhaps norms of enthusiasm or suspicion towards foreign research projects, both inside particular organisations and between nations.

To better understand what biases might have influenced the survey data, we analysed participant rating behaviours with a series of linear regressions. Numerical ratings were the response, and different participant characteristics–country of nationality; country of residence; crime type expertise; and regional expertise–were the predictors. Our analysis found evidence (p < 0.05) that participants assigned higher ratings to the country or countries they either reside in or are citizens of, though this was not a strong or consistent result. For instance, regional experts did not consistently rate their region of expertise more highly than other regions. European and North American experts, for example, rated countries from these regions lower than countries from other regions. Our analysis of cybercrime type expertise showed even less systematic rating behaviour, with no regression yielding a statistically significant (p < 0.05) result. Small sample sizes across other known participant characteristics meant that further analyses of rating behaviour could not be performed. This applied to, for instance, whether residents and citizens of the top ten countries in the WCI nominated their own countries more or less often than other experts. On this point: 46% of participants nominated their own country at some point in the survey, but the majority (83%) of nominations were for a country different to the participant’s own country of residence or nationality. This suggests limited bias towards nominating one’s own country. Overall, these analyses point to an encouraging observation: while there is a slight home-country bias, it does not systematically result in higher ratings. Longitudinal data from future surveys, as well as a larger participant pool, will help clarify what other biases may affect rating behaviour.
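The regression check described above can be sketched minimally. The data and the single binary "home country" predictor below are invented for illustration, and the model is fitted by ordinary least squares with NumPy; this is not the paper's exact specification.

```python
# Hedged sketch of a rating-behaviour regression: numerical ratings as the
# response, one invented binary predictor ("is the rated country the
# rater's home country?"), fitted by ordinary least squares.
import numpy as np

def ols_fit(x, y):
    """Fit y = b0 + b1*x by least squares; return (intercept, slope)."""
    X = np.column_stack([np.ones(len(x)), np.asarray(x, dtype=float)])
    coef, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return coef[0], coef[1]

# Invented data: with one binary predictor, the slope is simply the
# difference in mean ratings (home-country minus other countries).
home = [0, 0, 0, 0, 1, 1]
rating = [5.0, 6.0, 5.0, 6.0, 7.0, 8.0]
intercept, slope = ols_fit(home, rating)
print(round(slope, 2))  # 2.0: home-country ratings average 2 points higher
```

A positive slope here would correspond to the slight home-country bias the analysis describes; the paper's actual analysis would additionally report a p-value for each predictor.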

There is little evidence to suggest that survey fatigue affected our data. As the survey progressed, the heterogeneity of nominated countries across all experts increased, from 41 different countries nominated in the first category to 63 different countries nominated in the final category. If fatigue played a significant role in the results then we would expect this number to decrease, as participants were not required to nominate countries within a category and would have been motivated to nominate fewer countries to avoid extending their survey time. We further investigated the data for evidence of survey fatigue in two additional ways: by performing a Mann-Kendall/Sen’s slope trend test (MK/S) to determine whether scores skewed significantly upwards or downwards towards the end of the survey; and by compiling an intra-individual response variability (IRV) index to search for long strings of repeated scores at the end of the survey [ 43 ]. The MK/S test was marginally statistically significant (p<0.048), but the results indicated that scores trended downwards only minimally (-0.002 slope coefficient). Likewise, while the IRV index uncovered a small group of participants (n = 5) who repeatedly inserted the same score, this behaviour was not more likely to happen at the end of the survey (see S7 and S8 Tables in S1 Appendix ).
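The two trend checks named above can be sketched in pure Python. The implementation below is a minimal textbook version of the Mann-Kendall S statistic and Sen's slope estimator, and the score sequence is invented; neither is the paper's code or data.

```python
# Hedged sketch of the Mann-Kendall S statistic and Sen's slope estimator
# used to test for drift in scores over the course of the survey.
from itertools import combinations
from statistics import median

def mann_kendall_s(xs):
    """S = sum over all pairs i < j of sign(x[j] - x[i])."""
    return sum((x2 > x1) - (x2 < x1) for x1, x2 in combinations(xs, 2))

def sens_slope(xs):
    """Median of the pairwise slopes (x[j] - x[i]) / (j - i)."""
    return median((xs[j] - xs[i]) / (j - i)
                  for i, j in combinations(range(len(xs)), 2))

# Invented sequence of scores drifting slightly downward: S is negative,
# and Sen's slope estimates the typical per-step change.
scores = [7, 7, 6, 7, 6, 6, 5]
print(mann_kendall_s(scores), sens_slope(scores))  # -13 -0.25
```

A slope coefficient near zero, as the paper reports (-0.002), would indicate that any downward drift in scores over the survey was negligible.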

It is encouraging that there is at least some external validation for the WCI’s highest ranked countries. Steenbergen and Marks [ 44 ] recommend that data produced from expert judgements should “demonstrate convergent validity with other measures of [the topic]–that is, the experts should provide evaluations of the same […] phenomenon that other measurement instruments pick up” (p. 359). Most studies of global cybercrime geography are, as noted in the introduction, based on technical measures that cannot accurately establish the true physical location of offenders (for example [ 1 , 4 , 28 , 33 , 45 ]). Comparing our results to these studies would therefore be of little value, as the phenomena being measured differ: they measure attack infrastructure, whereas the WCI measures offender location. Instead, in-depth qualitative cybercrime case studies provide a better comparison, at least for the small number of higher ranked countries. Though few such studies of profit-driven cybercrime exist, and the number of countries they cover is limited, the top ranked countries in the WCI match the key cybercrime-producing countries discussed in the qualitative literature (see for example [ 3 , 10 , 32 , 46 – 50 ]). Beyond this qualitative support, our sampling strategy–discussed in the Methods section above–is our most robust control for ensuring the validity of our data.

Along with contributing to theoretical debates on the (local) nature of organized crime [ 1 , 14 ], this index can also contribute to policy discussions. For instance, there is an ongoing debate as to the best approaches to cybercrime reduction, whether this involves improving cyber-law enforcement capacity [ 3 , 51 ], increasing legitimate job opportunities and access to youth programs for potential offenders [ 52 , 53 ], strengthening international agreements and law harmonization [ 54 – 56 ], developing more sophisticated and culturally-specific social engineering countermeasures [ 57 ], or reducing corruption [ 3 , 58 ]. As demonstrated by the geographical, economic, and political diversity of the top 15 countries (see Table 1 ), the likelihood that a single strategy will work in all cases is low. If cybercrime is driven by local factors, then mitigating it may require a localised approach that considers the different features of cybercrime in these contexts. But whatever strategies are applied in the fight against cybercrime, they should be targeted at the countries that produce the most cybercrime, or at least the most impactful forms of it [ 3 ]. An index is a valuable resource for identifying these countries and directing resources appropriately. Future research that explains what is driving cybercrime in these locations might also suggest more appropriate means for tackling the problem. Such an analysis could examine relevant correlates, such as corruption, law enforcement capacity, internet penetration, and education levels, to inform and test a theoretically-driven model of what drives cybercrime production in some locations but not others. It might also be possible to make a kind of prediction: to identify those nations that have not yet emerged as cybercrime hubs but may in the future. This would provide an early-warning system of sorts for policymakers seeking to prevent cybercrime around the world.

Limitations

In addition to the points discussed above, the findings of the WCI should be considered in light of some remaining limitations. Firstly, as noted in the Methods, our pool of experts was not as large or as globally representative as we had hoped. Achieving a sufficient response rate is a common issue across all surveys, and is especially difficult in those that employ the snowball technique [ 59 ] and also attempt to recruit experts [ 60 ]. However, ensuring that our survey data captures the most accurate picture of cybercrime activity is an essential aspect of the project, and the under-representation of experts from Africa and South America is noteworthy. More generally, our sample size (n = 92) is relatively small. Future iterations of the WCI survey should focus on recruiting a larger pool of experts, especially those from under-represented regions. However, this is a small and hard-to-reach population, which likely means the sample size will not grow significantly. While this limits statistical power, it is also a strength of the survey: by ensuring that we only recruit the top cybercrime experts in the world, the weight and validity of our data increase.

Secondly, though we developed our cybercrime types and measures with expert focus groups, the definitions used in the WCI will always be contestable. For instance, a small number of comments left at the end of the survey indicated that the Cashing out/money laundering category was unclear to some participants, who were unsure whether they should nominate the country in which these schemes are organised or the countries in which the actual cash out occurs. A small number of participants also commented that they were not sure whether the ‘impact’ of a country’s cybercrime output should be measured in terms of cost, social change, or some other metric. We limited any such uncertainties by running a series of focus groups to check that our categories were accurate to the cybercrime reality and comprehensible to practitioners in this area. We also ran a pilot version of the survey. The beginning of the survey described the WCI’s purpose and terms of reference, and participants were able to download a document that described the project’s methodology in further detail. Each time a participant was prompted to nominate countries as a significant source of a type of cybercrime, the type was re-defined and examples of offences under that type were provided. However, the examples were not exhaustive and the definitions were brief. This was done partly to avoid significantly lengthening the survey with detailed definitions and clarifications. We also wanted to avoid over-defining the cybercrime types so that any new techniques or attack types that emerged while the survey ran would be included in the data. Nonetheless, there will always remain some elasticity around participant interpretations of the survey.

Finally, although we restricted the WCI to profit-driven activity, the distinction between cybercrime that is financially-motivated, and cybercrime that is motivated by other interests, is sometimes blurred. Offenders who typically commit profit-driven offences may also engage in state-sponsored activities. Some of the countries with high rankings within the WCI may shelter profit-driven cybercriminals who are protected by corrupt state actors of various kinds, or who have other kinds of relationships with the state. Actors in these countries may operate under the (implicit or explicit) sanctioning of local police or government officials to engage in cybercrime. Thus while the WCI excludes state-based attacks, it may include profit-driven cybercriminals who are protected by states. Investigating the intersection between profit-driven cybercrime and the state is a strong focus in our ongoing and future research. If we continue to see evidence that these activities can overlap (see for example [ 32 , 61 – 63 ]), then any models explaining the drivers of cybercrime will need to address this increasingly important aspect of local cybercrime hubs.

This study makes use of an expert survey to better measure the geography of profit-driven cybercrime and presents the output of this effort: the World Cybercrime Index. This index, organised around five major categories of cybercrime, sheds light on the geographical concentrations of financially-motivated cybercrime offenders. The findings reveal that a select few countries pose the most significant cybercriminal threat. By illustrating that hubs often specialise in particular forms of cybercrime, the WCI also offers valuable insights into the local dimension of cybercrime. This study provides a foundation for devising a theoretically-driven model to explain why some countries produce more cybercrime than others. By contributing to a deeper understanding of cybercrime as a localised phenomenon, the WCI may help lift the veil of anonymity that protects cybercriminals and thereby enhance global efforts to combat this evolving threat.

Supporting information

S1 Indices. WCI indices.

Full indices for the WCI Overall and each WCI Type.

https://doi.org/10.1371/journal.pone.0297312.s001

S1 Appendix. Supporting information.

Details of respondent characteristics and analysis of rating behaviour.

https://doi.org/10.1371/journal.pone.0297312.s002

Acknowledgments

The data collection for this project was carried out as part of a partnership between the Department of Sociology, University of Oxford and UNSW Canberra Cyber. The analysis and writing phases received support from CRIMGOV. Fig 1 was generated using information from OpenStreetMap and OpenStreetMap Foundation, which is made available under the Open Database License.

  • 2. Lusthaus J, Bruce M, Phair N. Mapping the geography of cybercrime: A review of indices of digital offending by country. 2020.
  • 4. McCombie S, Pieprzyk J, Watters P. Cybercrime Attribution: An Eastern European Case Study. Proceedings of the 7th Australian Digital Forensics Conference. Perth, Australia: secAU—Security Research Centre, Edith Cowan University; 2009. pp. 41–51. https://researchers.mq.edu.au/en/publications/cybercrime-attribution-an-eastern-european-case-study
  • 7. Anderson R, Barton C, Bohme R, Clayton R, van Eeten M, Levi M, et al. Measuring the cost of cybercrime. The Economics of Information Security and Privacy. Springer; 2013. pp. 265–300. https://link.springer.com/chapter/10.1007/978-3-642-39498-0_12
  • 8. Anderson R, Barton C, Bohme R, Clayton R, Ganan C, Grasso T, et al. Measuring the Changing Cost of Cybercrime. California, USA; 2017.
  • 9. Morgan S. 2022 Official Cybercrime Report. Cybersecurity Ventures; 2022. https://s3.ca-central-1.amazonaws.com/esentire-dot-com-assets/assets/resourcefiles/2022-Official-Cybercrime-Report.pdf
  • 12. Wall D. Cybercrime: The Transformation of Crime in the Information Age. Polity Press; 2007.
  • 14. Varese F. Mafias on the move: how organized crime conquers new territories. Princeton University Press; 2011.
  • 15. Dupont B. Skills and Trust: A Tour Inside the Hard Drives of Computer Hackers. Crime and networks. Routledge; 2013.
  • 16. Franklin J, Paxson V, Savage S. An Inquiry into the Nature and Causes of the Wealth of Internet Miscreants. Proceedings of the 2007 ACM Conference on Computer and Communications Security. Alexandria, Virginia, USA; 2007.
  • 17. Hutchings A, Clayton R. Configuring Zeus: A case study of online crime target selection and knowledge transmission. Scottsdale, AZ, USA: IEEE; 2017.
  • 20. Levesque F, Fernandez J, Somayaji A, Batchelder. National-level risk assessment: A multi-country study of malware infections. 2016. https://homeostasis.scs.carleton.ca/~soma/pubs/levesque-weis2016.pdf
  • 21. Crowdstrike. 2022 Global Threat Report. Crowdstrike; 2022. https://go.crowdstrike.com/crowdstrike/gtr
  • 22. EC3. Internet Organised Crime Threat Assessment (IOCTA) 2021. EC3; 2021. https://www.europol.europa.eu/publications-events/main-reports/internet-organised-crime-threat-assessment-iocta-2021
  • 23. ENISA. ENISA threat Landscape 2021. ENISA; 2021. https://www.enisa.europa.eu/publications/enisa-threat-landscape-2021
  • 24. Sophos. Sophos 2022 Threat Report. Sophos; 2022. https://www.sophos.com/en-us/labs/security-threat-report
  • 25. van Eeten M, Bauer J, Asghari H, Tabatabaie S, Rand D. The Role of Internet Service Providers in Botnet Mitigation: An Empirical Analysis Based on Spam Data. TPRC 2010. https://ssrn.com/abstract=1989198
  • 26. He S, Lee GM, Quarterman JS, Whinston A. Cybersecurity Policies Design and Evaluation: Evidence from a Large-Scale Randomized Field Experiment. 2015. https://econinfosec.org/archive/weis2015/papers/WEIS_2015_he.pdf
  • 27. Snyder P, Kanich C. No Please, After You: Detecting Fraud in Affiliate Marketing Networks. 2015. https://econinfosec.org/archive/weis2015/papers/WEIS_2015_snyder.pdf
  • 29. Wang Q-H, Kim S-H. Cyber Attacks: Cross-Country Interdependence and Enforcement. 2009. http://weis09.infosecon.net/files/153/paper153.pdf
  • 32. Lusthaus J. Industry of Anonymity: Inside the Business of Cybercrime. Harvard University Press; 2018.
  • 33. Kshetri N. The Global Cybercrime Industry: Economic, Institutional and Strategic Perspectives. Berlin: Springer; 2010.
  • 36. Backor K, Golde S, Nie N. Estimating Survey Fatigue in Time Use Study. Washington, DC.; 2007. https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=401f97f2d7c684b295486636d8a84c627eb33446
  • 42. Reuter P. Disorganized Crime: Illegal Markets and the Mafia. MIT Press; 1985.
  • 47. Sotande E. Transnational Organised Crime and Illicit Financial Flows: Nigeria, West Africa and the Global North. University of Leeds, School of Law. 2016. https://etheses.whiterose.ac.uk/15473/1/Emmanuel%20Sotande%20Thessis%20at%20the%20University%20of%20Leeds.%20viva%20corrected%20version%20%281%29.pdf
  • 48. Lusthaus J. Modelling cybercrime development: the case of Vietnam. The Human Factor of Cybercrime. Routledge; 2020. pp. 240–257.
  • 51. Lusthaus J. Electronic Ghosts. In: Democracy: A Journal of Ideas [Internet]. 2014. https://democracyjournal.org/author/jlusthaus/
  • 52. Brewer R, de Vel-Palumbo M, Hutchings A, Maimon D. Positive Diversions. Cybercrime Prevention. 2019. https://www.researchgate.net/publication/337297392_Positive_Diversions
  • 53. National Cyber Crime Unit / Prevent Team. Pathways Into Cyber Crime. National Crime Agency; 2017. https://www.nationalcrimeagency.gov.uk/who-we-are/publications/6-pathways-into-cyber-crime-1/file
  • 60. Christopoulos D. Peer Esteem Snowballing: A methodology for expert surveys. 2009. https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=63ac9f6db0a2fa2e0ca08cd28961385f98ec21ec


World-first “Cybercrime Index” ranks countries by cybercrime threat level

Following three years of intensive research, an international team of researchers have compiled the first ever ‘World Cybercrime Index’, which identifies the globe’s key cybercrime hotspots by ranking the most significant sources of cybercrime at a national level.

The Index, published today in the journal PLOS ONE , shows that a relatively small number of countries house the greatest cybercriminal threat. Russia tops the list, followed by Ukraine, China, the USA, Nigeria, and Romania. The UK comes in at number eight.


‘The research that underpins the Index will help remove the veil of anonymity around cybercriminal offenders, and we hope that it will aid the fight against the growing threat of profit-driven cybercrime,’ Dr Bruce said.

‘We now have a deeper understanding of the geography of cybercrime, and how different countries specialise in different types of cybercrime.’

‘By continuing to collect this data, we’ll be able to monitor the emergence of any new hotspots and it is possible early interventions could be made in at-risk countries before a serious cybercrime problem even develops.’

The data that underpins the Index was gathered through a survey of 92 leading cybercrime experts from around the world who are involved in cybercrime intelligence gathering and investigations. The survey asked the experts to consider five major categories of cybercrime*, nominate the countries that they consider to be the most significant sources of each of these types of cybercrime, and then rank each country according to the impact, professionalism, and technical skill of its cybercriminals.

[Image: List of countries with their World Cybercrime Index score. The top ten countries are Russia, Ukraine, China, the US, Nigeria, Romania, North Korea, the UK, Brazil and India.]

Co-author Associate Professor Jonathan Lusthaus , from the University of Oxford’s Department of Sociology and Oxford School of Global and Area Studies, said cybercrime has largely been an invisible phenomenon because offenders often mask their physical locations by hiding behind fake profiles and technical protections.

'Due to the illicit and anonymous nature of their activities, cybercriminals cannot be easily accessed or reliably surveyed. They are actively hiding. If you try to use technical data to map their location, you will also fail, as cybercriminals bounce their attacks around internet infrastructure across the world. The best means we have to draw a picture of where these offenders are actually located is to survey those whose job it is to track these people,' Dr Lusthaus said.

'Figuring out why some countries are cybercrime hotspots, and others aren't, is the next stage of the research. There are existing theories about why some countries have become hubs of cybercriminal activity - for example, that a technically skilled workforce with few employment opportunities may turn to illicit activity to make ends meet - which we'll be able to test against our global data set,' said Dr Miranda Bruce, Department of Sociology, University of Oxford and UNSW Canberra.

Co-author of the study, Professor Federico Varese from Sciences Po in France, said the World Cybercrime Index is the first step in a broader aim to understand the local dimensions of cybercrime production across the world.

‘We are hoping to expand the study so that we can determine whether national characteristics like educational attainment, internet penetration, GDP, or levels of corruption are associated with cybercrime. Many people think that cybercrime is global and fluid, but this study supports the view that, much like forms of organised crime, it is embedded within particular contexts,’ Professor Varese said.

The World Cybercrime Index has been developed as a joint partnership between the University of Oxford and UNSW and has also been funded by CRIMGOV , a European Union-supported project based at the University of Oxford and Sciences Po. The other co-authors of the study include Professor Ridhi Kashyap from the University of Oxford and Professor Nigel Phair from Monash University.

The study ‘Mapping the global geography of cybercrime with the World Cybercrime Index’ has been published in the journal PLOS ONE .

*The five major categories of cybercrime assessed by the study were:

1.   Technical products/services (e.g. malware coding, botnet access, access to compromised systems, tool production).

2.   Attacks and extortion (e.g. denial-of-service attacks, ransomware).

3.   Data/identity theft (e.g. hacking, phishing, account compromises, credit card compromises).

4.   Scams (e.g. advance fee fraud, business email compromise, online auction fraud).

5.   Cashing out/money laundering (e.g. credit card fraud, money mules, illicit virtual currency platforms).


Questionnaire Design | Methods, Question Types & Examples

Published on July 15, 2021 by Pritha Bhandari . Revised on June 22, 2023.

A questionnaire is a list of questions or items used to gather data from respondents about their attitudes, experiences, or opinions. Questionnaires can be used to collect quantitative and/or qualitative information.

Questionnaires are commonly used in market research as well as in the social and health sciences. For example, a company may ask for feedback about a recent customer service experience, or psychology researchers may investigate health risk perceptions using questionnaires.

Table of contents

  • Questionnaires vs. surveys
  • Questionnaire methods
  • Open-ended vs. closed-ended questions
  • Question wording
  • Question order
  • Step-by-step guide to design
  • Other interesting articles
  • Frequently asked questions about questionnaire design

A survey is a research method where you collect and analyze data from a group of people. A questionnaire is a specific tool or instrument for collecting the data.

Designing a questionnaire means creating valid and reliable questions that address your research objectives , placing them in a useful order, and selecting an appropriate method for administration.

But designing a questionnaire is only one component of survey research. Survey research also involves defining the population you’re interested in, choosing an appropriate sampling method , administering questionnaires, data cleansing and analysis, and interpretation.

Sampling is important in survey research because you’ll often aim to generalize your results to the population. Gather data from a sample that represents the range of views in the population for externally valid results. There will always be some differences between the population and the sample, but minimizing these will help you avoid several types of research bias , including sampling bias , ascertainment bias , and undercoverage bias .


Questionnaires can be self-administered or researcher-administered . Self-administered questionnaires are more common because they are easy to implement and inexpensive, but researcher-administered questionnaires allow deeper insights.

Self-administered questionnaires

Self-administered questionnaires can be delivered online or in paper-and-pen formats, in person or through mail. All questions are standardized so that all respondents receive the same questions with identical wording.

Self-administered questionnaires can be:

  • cost-effective
  • easy to administer for small and large groups
  • anonymous and suitable for sensitive topics

But they may also be:

  • unsuitable for people with limited literacy or verbal skills
  • susceptible to a nonresponse bias (most people invited may not complete the questionnaire)
  • biased towards people who volunteer because impersonal survey requests often go ignored.

Researcher-administered questionnaires

Researcher-administered questionnaires are interviews that take place by phone, in-person, or online between researchers and respondents.

Researcher-administered questionnaires can:

  • help you ensure the respondents are representative of your target audience
  • allow clarifications of ambiguous or unclear questions and answers
  • have high response rates because it’s harder to refuse an interview when personal attention is given to respondents

But researcher-administered questionnaires can be limiting in terms of resources. They are:

  • costly and time-consuming to perform
  • more difficult to analyze if you have qualitative responses
  • likely to contain experimenter bias or demand characteristics
  • likely to encourage social desirability bias in responses because of a lack of anonymity

Your questionnaire can include open-ended or closed-ended questions or a combination of both.

Using closed-ended questions limits your responses, while open-ended questions enable a broad range of answers. You’ll need to balance these considerations with your available time and resources.

Closed-ended questions

Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. Closed-ended questions are best for collecting data on categorical or quantitative variables.

Categorical variables can be nominal or ordinal. Quantitative variables can be interval or ratio. Understanding the type of variable and level of measurement means you can perform appropriate statistical analyses for generalizable results.

Examples of closed-ended questions for different variables

Nominal variables include categories that can’t be ranked, such as race or ethnicity. This includes binary or dichotomous categories.

It’s best to include categories that cover all possible answers and are mutually exclusive. There should be no overlap between response items.

In binary or dichotomous questions, you’ll give respondents only two options to choose from.

  • White
  • Black or African American
  • American Indian or Alaska Native
  • Asian
  • Native Hawaiian or Other Pacific Islander

Ordinal variables include categories that can be ranked. Consider how wide or narrow a range you’ll include in your response items, and their relevance to your respondents.

Likert scale questions collect ordinal data using rating scales with 5 or 7 points.

When you have four or more Likert-type questions, you can treat the composite data as quantitative data on an interval scale. Intelligence tests, psychological scales, and personality inventories use multiple Likert-type questions to collect interval data.

With interval or ratio scales, you can apply strong statistical hypothesis tests to address your research aims.
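As a sketch of how composite scoring works in practice, the items can be summed per respondent and checked for internal consistency. The responses below are hypothetical, and Cronbach's alpha is used here as one common (not the only) consistency check:

```python
from statistics import pvariance

# Hypothetical data: 5 respondents x 4 Likert items scored 1-5
responses = [
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 4, 4],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
]

# Summing the items gives one composite score per respondent,
# which can be treated as interval-level data
composites = [sum(r) for r in responses]

# Cronbach's alpha: do the items measure the same underlying trait?
k = len(responses[0])
item_vars = [pvariance([r[i] for r in responses]) for i in range(k)]
alpha = k / (k - 1) * (1 - sum(item_vars) / pvariance(composites))
# alpha above roughly 0.7 is conventionally taken as acceptable
```

With this toy data the items move together across respondents, so alpha comes out high; real questionnaire data is rarely this tidy.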

Pros and cons of closed-ended questions

Well-designed closed-ended questions are easy to understand and can be answered quickly. However, you might still miss important answers that are relevant to respondents. An incomplete set of response items may force some respondents to pick the closest alternative to their true answer. These types of questions may also miss out on valuable detail.

To solve these problems, you can make questions partially closed-ended, and include an open-ended option where respondents can fill in their own answer.

Open-ended questions

Open-ended, or long-form, questions allow respondents to give answers in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered. For example, respondents may want to answer “multiracial” for the question on race rather than selecting from a restricted list.

  • How do you feel about open science?
  • How would you describe your personality?
  • In your opinion, what is the biggest obstacle for productivity in remote work?

Open-ended questions have a few downsides.

They require more time and effort from respondents, which may deter them from completing the questionnaire.

For researchers, understanding and summarizing responses to these questions can take a lot of time and resources. You’ll need to develop a systematic coding scheme to categorize answers, and you may also need to involve other researchers in data analysis for high reliability .
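A coding scheme can start as simply as a keyword-to-category lookup that is refined as new answers arrive. The categories, keywords, and answers below are hypothetical; in practice a second coder would re-code a sample of answers to check reliability:

```python
# Hypothetical coding scheme for an open-ended question about
# obstacles to productivity in remote work
CODING_SCHEME = {
    "communication": ["meeting", "email", "message"],
    "environment": ["noise", "space", "desk"],
    "technology": ["internet", "laptop", "software"],
}

def code_answer(text: str) -> str:
    """Assign a free-text answer to the first category with a matching keyword."""
    text = text.lower()
    for category, keywords in CODING_SCHEME.items():
        if any(kw in text for kw in keywords):
            return category
    return "other"  # uncoded answers go to a researcher for manual review

answers = [
    "Too many meetings interrupt my focus",
    "My home internet connection is unreliable",
    "I miss having a proper desk",
]
codes = [code_answer(a) for a in answers]  # one code per answer
```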

Question wording can influence your respondents’ answers, especially if the language is unclear, ambiguous, or biased. Good questions need to be understood by all respondents in the same way (reliable) and measure exactly what you’re interested in (valid).

Use clear language

You should design questions with your target audience in mind. Consider their familiarity with your questionnaire topics and language and tailor your questions to them.

For readability and clarity, avoid jargon or overly complex language. Don’t use double negatives because they can be harder to understand.

Use balanced framing

Respondents often answer in different ways depending on the question framing. Positive frames are interpreted as more neutral than negative frames and may encourage more socially desirable answers.

Use a mix of both positive and negative frames to avoid research bias , and ensure that your question wording is balanced wherever possible.

Unbalanced questions focus on only one side of an argument. Respondents may be less likely to oppose the question if it is framed in a particular direction. It’s best practice to provide a counterargument within the question as well.

Avoid leading questions

Leading questions guide respondents towards answering in specific ways, even if that’s not how they truly feel, by explicitly or implicitly providing them with extra information.

It’s best to keep your questions short and specific to your topic of interest.

  • The average daily work commute in the US takes 54.2 minutes and costs $29 per day. Since 2020, working from home has saved many employees time and money. Do you favor flexible work-from-home policies even after it’s safe to return to offices?
  • Experts agree that a well-balanced diet provides sufficient vitamins and minerals, and multivitamins and supplements are not necessary or effective. Do you agree or disagree that multivitamins are helpful for balanced nutrition?

Keep your questions focused

Ask about only one idea at a time and avoid double-barreled questions. Double-barreled questions ask about more than one item at a time, which can confuse respondents.

Consider a question like “Do you agree or disagree that the government should be responsible for providing clean drinking water and high-speed internet to everyone?” This question could be difficult to answer for respondents who feel strongly about the right to clean drinking water but not high-speed internet. They might only answer about the topic they feel passionate about or provide a neutral answer instead – but neither of these options captures their true answer.

Instead, you should ask two separate questions to gauge respondents’ opinions.

Do you agree or disagree that the government should be responsible for providing high-speed internet to everyone?

  • Strongly Agree
  • Agree
  • Undecided
  • Disagree
  • Strongly Disagree

You can organize the questions logically, with a clear progression from simple to complex. Alternatively, you can randomize the question order between respondents.

Logical flow

Using a logical flow to your question order means starting with simple questions, such as behavioral or opinion questions, and ending with more complex, sensitive, or controversial questions.

The question order that you use can significantly affect responses by priming respondents in specific directions. Question order effects, or context effects, occur when earlier questions influence the responses to later questions, reducing the validity of your questionnaire.

While demographic questions are usually unaffected by order effects, questions about opinions and attitudes are more susceptible to them.

  • How knowledgeable are you about Joe Biden’s executive orders in his first 100 days?
  • Are you satisfied or dissatisfied with the way Joe Biden is managing the economy?
  • Do you approve or disapprove of the way Joe Biden is handling his job as president?

It’s important to minimize order effects because they can be a source of systematic error or bias in your study.

Randomization

Randomization involves presenting individual respondents with the same questionnaire but with different question orders.

When you use randomization, order effects will be minimized in your dataset. But a randomized order may also make it harder for respondents to process your questionnaire. Some questions may need more cognitive effort, while others are easier to answer, so a random order could require more time or mental capacity for respondents to switch between questions.
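Most online survey platforms offer randomization as a built-in option, but the mechanics are simple to sketch. The question texts below are invented, and seeding the shuffle with a respondent ID is an assumed convention for keeping each ordering reproducible at analysis time:

```python
import random

questions = [
    "How often do you work from home?",
    "How satisfied are you with your commute?",
    "Do you favor flexible work-from-home policies?",
]

def randomized_order(questions: list[str], respondent_id: int) -> list[str]:
    # Each respondent gets their own shuffle; seeding by ID makes it reproducible
    rng = random.Random(respondent_id)
    order = questions.copy()
    rng.shuffle(order)
    return order

# Same questions, potentially different order per respondent
per_respondent = {rid: randomized_order(questions, rid) for rid in (1, 2, 3)}
```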

Step 1: Define your goals and objectives

The first step of designing a questionnaire is determining your aims.

  • What topics or experiences are you studying?
  • What specifically do you want to find out?
  • Is a self-report questionnaire an appropriate tool for investigating this topic?

Once you’ve specified your research aims, you can operationalize your variables of interest into questionnaire items. Operationalizing concepts means turning them from abstract ideas into concrete measurements. Every question needs to address a defined need and have a clear purpose.

Step 2: Use questions that are suitable for your sample

Create appropriate questions by taking the perspective of your respondents. Consider their language proficiency and available time and energy when designing your questionnaire.

  • Are the respondents familiar with the language and terms used in your questions?
  • Would any of the questions insult, confuse, or embarrass them?
  • Do the response items for any closed-ended questions capture all possible answers?
  • Are the response items mutually exclusive?
  • Do the respondents have time to respond to open-ended questions?

Consider all possible options for responses to closed-ended questions. From a respondent’s perspective, a lack of response options reflecting their point of view or true answer may make them feel alienated or excluded. In turn, they’ll become disengaged or inattentive to the rest of the questionnaire.

Step 3: Decide on your questionnaire length and question order

Once you have your questions, make sure that the length and order of your questions are appropriate for your sample.

If respondents are not being incentivized or compensated, keep your questionnaire short and easy to answer. Otherwise, your sample may be biased with only highly motivated respondents completing the questionnaire.

Decide on your question order based on your aims and resources. Use a logical flow if your respondents have limited time or if you cannot randomize questions. Randomizing questions helps you avoid bias, but it can take more complex statistical analysis to interpret your data.

Step 4: Pretest your questionnaire

When you have a complete list of questions, you’ll need to pretest it to make sure what you’re asking is always clear and unambiguous. Pretesting helps you catch any errors or points of confusion before performing your study.

Ask friends, classmates, or members of your target audience to complete your questionnaire using the same method you’ll use for your research. Find out if any questions were particularly difficult to answer or if the directions were unclear or inconsistent, and make changes as necessary.

If you have the resources, running a pilot study will help you test the validity and reliability of your questionnaire. A pilot study is a practice run of the full study, and it includes sampling, data collection, and analysis. You can find out whether your procedures are unfeasible or susceptible to bias and make changes in time, but you can’t test a hypothesis with this type of study because it’s usually statistically underpowered.

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Student’s t-distribution
  • Normal distribution
  • Null and Alternative Hypotheses
  • Chi square tests
  • Confidence interval
  • Quartiles & Quantiles
  • Cluster sampling
  • Stratified sampling
  • Data cleansing
  • Reproducibility vs Replicability
  • Peer review
  • Prospective cohort study

Research bias

  • Implicit bias
  • Cognitive bias
  • Placebo effect
  • Hawthorne effect
  • Hindsight bias
  • Affect heuristic
  • Social desirability bias

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analyzing data from people using questionnaires.

Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. These questions are easier to answer quickly.

Open-ended or long-form questions allow respondents to answer in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviors. It is made up of 4 or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey , you present participants with Likert-type questions or statements, and a continuum of items, usually with 5 or 7 possible responses, to capture their degree of agreement.

You can organize the questions logically, with a clear progression from simple to complex, or randomly between respondents. A logical flow helps respondents process the questionnaire easier and quicker, but it may lead to bias. Randomization can minimize the bias from order effects.

Questionnaires can be self-administered or researcher-administered.

Researcher-administered questionnaires are interviews that take place by phone, in-person, or online between researchers and respondents. You can gain deeper insights by clarifying questions for respondents or asking follow-up questions.

Cite this Scribbr article

Bhandari, P. (2023, June 22). Questionnaire Design | Methods, Question Types & Examples. Scribbr. Retrieved April 15, 2024, from https://www.scribbr.com/methodology/questionnaire/


35 Content Marketing Statistics You Should Know

Stay informed with the latest content marketing statistics. Discover how optimized content can elevate your digital marketing efforts.

Content continues to sit atop the list of priorities in most marketing strategies, and there is plenty of evidence to support the reasoning.

Simply put, content marketing is crucial to any digital marketing strategy, whether running a small local business or a large multinational corporation.

After all, content in its many and evolving forms is indisputably the very lifeblood upon which the web and social media are based.

Modern SEO has effectively become optimized content marketing for all intents and purposes.

This is because Google demands and rewards businesses that create content demonstrating experience, expertise, authoritativeness, and trustworthiness (E-E-A-T) for their customers – content that answers all of the questions consumers may have about their services, products, or business in general.

Content marketing involves creating and sharing helpful, relevant, entertaining, and consistent content in various text, image, video, and audio-based formats to the plethora of traditional and online channels available to modern marketers.

The primary focus should be on attracting and retaining a clearly defined audience, with the ultimate goal of driving profitable customer action.

Different types of content can and should be created for each stage of a customer’s journey .

Some content, like blogs or how-to videos, is informative or educational. Meanwhile, other content, like promotional campaign landing pages , gets to the point of enticing prospective customers to buy.

But with so much content being produced and shared every day, it’s important to stay updated on the latest trends and best practices in content marketing to keep pace and understand what strategies may be most effective.

Never has this been more true than in 2024, when we’re in the midst of a content revolution led by generative AI , which some feel represents both an opportunity and a threat to marketers.

To help you keep up, here are 35 content marketing statistics I think you should know:

Content Marketing Usage

How many businesses are leveraging content marketing, and how are they planning to find success?

  • According to the Content Marketing Institute (CMI), 73% of B2B marketers and 70% of B2C marketers use content marketing as part of their overall marketing strategy.
  • 97% of marketers surveyed by Semrush achieved success with their content marketing in 2023.
  • A B2B Content Marketing Study conducted by CMI found that 40% of B2B marketers have a documented content marketing strategy; 33% have a strategy, but it’s not documented, and 27% have no strategy.
  • Half of the marketers surveyed by CMI said they outsource at least one content marketing activity.

Content Marketing Strategy

What strategies are content marketers using or finding to be most effective?

  • 83% of marketers believe it’s more effective to create higher quality content less often. (Source: Hubspot)
  • In a 2022 Statista Research Study of marketers worldwide, 62% of respondents emphasized the importance of being “always on” for their customers, while 23% viewed content-led communications as the most effective method for personalized targeting efforts.
  • With the increased focus on AI-generated search engine results, 31% of B2B marketers say they are sharpening their focus on user intent/answering questions, 27% are creating more thought leadership content, and 22% are creating more conversational content. (Source: CMI)

Types Of Content

Content marketing was once synonymous with posting blogs, but the web and content have evolved into audio, video, interactive, and meta formats.

Here are a few stats on how the various types of content are trending and performing.

  • Short-form video content, like TikTok and Instagram Reels, is the No. 1 content marketing format, offering the highest return on investment (ROI).
  • 43% of marketers reported that original graphics (like infographics and illustrations) were the most effective type of visual content. (Source: Venngage)
  • 72% of B2C marketers expected their organization to invest in video marketing in 2022. (Source: Content Marketing Institute – CMI)
  • The State of Content Marketing: 2023 Global Report by Semrush reveals that articles containing at least one video tend to attract 70% more organic traffic than those without.
  • Interactive content generates 52.6% more engagement compared to static content. On average, buyers spend 8.5 minutes viewing static content items and 13 minutes on interactive content items. (Source: Mediafly)

Content Creation

Creating helpful, unique, engaging content can be one of a marketer’s greatest challenges. However, innovative marketers are looking at generative AI as a tool to help ideate, create, edit, and analyze content quicker and more cost-effectively.

Here are some stats around content creation and just how quickly AI is changing the game.

  • Generative AI reached over 100 million users just two months after ChatGPT’s launch. (Source: Search Engine Journal)
  • A recent Ahrefs poll found that almost 80% of respondents had already adopted AI tools in their content marketing strategies.
  • Marketers who are using AI said it helps most with brainstorming new topics (51%), researching headlines and keywords (45%), and writing drafts (45%). (Source: CMI)
  • Further, marketers polled by Hubspot said they save 2.5 hours per day using AI for content.

Content Distribution

It is not simply enough to create and publish content.

For a content strategy to be successful, it must include distributing content via the channels frequented by a business’s target audience.

  • Facebook is still the dominant social channel for content distribution, but video-centric channels like YouTube, TikTok, and Instagram are growing the fastest. (Source: Hubspot)
  • B2B marketers reported to CMI that LinkedIn was the most common and top-performing organic social media distribution channel at 84% by a healthy margin. All other channels came in under 30%.
  • 80% of B2B marketers who use paid distribution use paid social media advertising. (Source: CMI)

Content Consumption

Once content reaches an audience, it’s important to understand how an audience consumes the content or takes action as a result.

  • A 2023 Content Preferences Study by Demand Gen reveals that 62% of B2B buyers prefer practical content like case studies to inform their purchasing decisions, citing “a need for valid sources.”
  • The same study also found that buyers tend to rely heavily on content when researching potential business solutions, with 46% reporting that they increased the amount of content they consumed during this time.
  • In a recent post, blogger Ryan Robinson reports the average reader spends 37 seconds reading a blog.
  • DemandGen’s survey participants also said they rely most on demos (62%) and user reviews (55%) to gain valuable insights into how a solution will meet their needs.

Content Marketing Performance

One of the primary reasons content marketing has taken off is its ability to be measured, optimized, and tied to a return on investment.

  • B2C marketers reported to CMI that the top three goals content marketing helps them to achieve are creating brand awareness, building trust, and educating their target audience.
  • 87% of B2B marketers surveyed use content marketing successfully to generate leads.
  • 56% of marketers who leverage blogging say it’s an effective tactic, and 10% say it generates the greatest return on investment (ROI).
  • 94% of marketers said personalization boosts sales.

Content Marketing Budgets

Budget changes and the willingness to invest in specific marketing strategies are good indicators of how popular and effective these strategies are at a macro level.

The following stats certainly seem to indicate marketers have bought into the value of content.

  • 61% of B2C marketers said their 2022 content marketing budget would exceed their 2021 budget.
  • 22% of B2B marketers said they spent 50% or more of their total marketing budget on content marketing. Furthermore, 43% saw their content marketing budgets grow from 2020 to 2021, and 66% expected them to grow again in 2022.

Content Challenges

All forms of marketing come with challenges related to time, resources, expertise, and competition.

Recognizing and addressing these challenges head-on with well-thought-out strategies is the best way to overcome them and realize success.

  • Top 3 content challenges included “attracting quality leads with content” (45%), “creating more content faster” (38%), and “generating content ideas” (35%). (Source: Semrush’s The State of Content Marketing: 2023 Global Report)
  • 44% of marketers polled for CMI’s 2022 B2B report highlighted the challenge of creating the right content for multi-level roles as their top concern. This replaced internal communication as the top challenge from the previous year.
  • Changes to SEO/search algorithms (64%), changes to social media algorithms (53%), and data management/analytics (48%) are also among the top concerns for B2C marketers.
  • 47% of people are seeking downtime from internet-enabled devices due to digital fatigue.
  • While generative AI has noted benefits, it also presents challenges for some marketers who fear it may replace them. In Hubspot’s study, 23% said they felt we should avoid using generative AI.
  • Another challenge with AI is how quickly it has come onto the scene without giving organizations time to provide training or to create policies and procedures for its appropriate and legal use. According to CMI, when asked if their organizations have guidelines for using generative AI tools, 31% of marketers said yes, 61% said no, and 8% were unsure.

Time To Get Started

As you can clearly see and perhaps have already realized, content marketing can be a highly effective and cost-efficient way to generate leads, build brand awareness, and drive sales. Content, in its many formats, powers virtually all online interactions.

Generative AI is effectively helping to solve some of the time and resource challenges by acting as a turbo-powered marketing assistant, while also raising a few procedural concerns.

However, the demand for content remains strong.

Those willing to put in the work of building a documented content strategy and executing it – by producing, optimizing, distributing, and monitoring high-value, relevant, customer-centric content, with the help of AI or not – can reap significant business rewards.

More resources:

  • 6 Ways To Humanize Your Content In The AI Era
  • Interactive Content: 10 Types To Engage Your Audience
  • B2B Lead Generation: Create Content That Converts

  • Ann R Coll Surg Engl
  • v.95(1); 2013 Jan

A quick guide to survey research

1 University of Cambridge, UK

2 Cambridge University Hospitals NHS Foundation Trust, UK

Questionnaires are a very useful survey tool that allow large populations to be assessed with relative ease. Despite a widespread perception that surveys are easy to conduct, in order to yield meaningful results, a survey needs extensive planning, time and effort. In this article, we aim to cover the main aspects of designing, implementing and analysing a survey as well as focusing on techniques that would improve response rates.

Medical research questionnaires or surveys are vital tools used to gather information on individual perspectives in a large cohort. Within the medical realm, there are three main types of survey: epidemiological surveys, surveys on attitudes to a health service or intervention and questionnaires assessing knowledge on a particular issue or topic. 1

Clear research goal

The first and most important step in designing a survey is to have a clear idea of what you are looking for. It will always be tempting to take a blanket approach and ask as many questions as possible in the hope of getting as much information as possible. This type of approach does not work as asking too many irrelevant or incoherent questions reduces the response rate 2 and therefore reduces the power of the study. This is especially important when surveying physicians as they often have a lower response rate than the rest of the population. 3 Instead, you must carefully consider the important data you will be using and work on a ‘need to know’ rather than a ‘would be nice to know’ model. 4

After considering the question you are trying to answer, deciding whom you are going to ask is the next step. With small populations, attempting to survey them all is manageable but as your population gets bigger, a sample must be taken. The size of this sample is more important than you might expect. After lost questionnaires, non-responders and improper answers are taken into account, this sample must still be big enough to be representative of the entire population. If it is not big enough, the power of your statistics will drop and you may not get any meaningful answers at all. It is for this reason that getting a statistician involved in your study early on is absolutely crucial. Data should not be collected until you know what you are going to do with them.
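As a back-of-the-envelope illustration of why sample size matters, the standard formula for estimating a single proportion gives the minimum sample before adjusting for non-response. The formula is textbook material, but the 60% response rate below is an assumed figure, not one from this article:

```python
from math import ceil

def sample_size(z: float = 1.96, p: float = 0.5, margin: float = 0.05) -> int:
    """Minimum sample for estimating a proportion: n = z^2 * p(1-p) / e^2.

    z: z-score for the confidence level (1.96 for 95%)
    p: expected proportion (0.5 is the most conservative choice)
    margin: acceptable margin of error
    """
    return ceil(z ** 2 * p * (1 - p) / margin ** 2)

n = sample_size()          # completed responses needed for +/-5% at 95% confidence
n_invited = ceil(n / 0.6)  # invite more to allow for an assumed 60% response rate
```

This is exactly the kind of calculation a statistician would refine: non-response, improper answers, and subgroup analyses all push the required sample higher.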

Directed questions

After settling on your research goal and beginning to design a questionnaire, the main considerations are the method of data collection, the survey instrument and the type of question you are going to ask. Methods of data collection include personal interviews, telephone, postal or electronic ( Table 1 ).

Advantages and disadvantages of survey methods

Collected data are only useful if they convey information accurately and consistently about the topic in which you are interested. This is where a validated survey instrument comes into the questionnaire design. Validated instruments are those that have been extensively tested and are correctly calibrated to their target. They can therefore be assumed to be accurate. 1 It may be possible to modify a previously validated instrument but you should seek specialist advice as this is likely to reduce its power. Examples of validated models are the Beck Hopelessness Scale 5 or the Addenbrooke’s Cognitive Examination. 6

The next step is choosing the type of question you are going to ask. The questionnaire should be designed to answer the question you want answered. Each question should be clear, concise and without bias. Normalising statements should be included and the language level targeted towards those at the lowest educational level in your cohort. 1 You should avoid open, double-barrelled questions and those questions that include negative items and assign causality. 1 The questions you use may elicit either an open (free text answer) or closed response. Open responses are more flexible but require more time and effort to analyse, whereas closed responses require more initial input in order to exhaust all possible options but are easier to analyse and present.

Questionnaire

Two more aspects come into questionnaire design: aesthetics and question order. While this is not relevant to telephone or personal questionnaires, in self-administered surveys the aesthetics of the questionnaire are crucial. Having spent a large amount of time fine-tuning your questions, presenting them in such a way as to maximise response rates is pivotal to obtaining good results. Visual elements to think of include smooth, simple and symmetrical shapes, soft colours and repetition of visual elements. 7

Once you have attracted your subject’s attention and willingness with a well designed and attractive survey, the order in which you put your questions is critical. To do this you should focus on what you need to know; start by placing easier, important questions at the beginning, group common themes in the middle and keep questions on demographics to near the end. The questions should be arrayed in a logical order, questions on the same topic close together and with sensible sections if long enough to warrant them. Introductory and summary questions to mark the start and end of the survey are also helpful.

Pilot study

Once a completed survey has been compiled, it needs to be tested. The ideal next step should highlight spelling errors, ambiguous questions and anything else that impairs completion of the questionnaire. 8 A pilot study, in which you apply your work to a small sample of your target population in a controlled setting, may highlight areas in which work still needs to be done. Where possible, being present while the pilot is going on will allow a focus group-type atmosphere in which you can discuss aspects of the survey with those who are going to be filling it in. This step may seem non-essential but detecting previously unconsidered difficulties needs to happen as early as possible and it is important to use your participants’ time wisely as they are unlikely to give it again.

Distribution and collection

Although it should be considered early on, we now discuss routes of survey administration and ways to maximise results. Questionnaires can be self-administered electronically or by post, or administered by a researcher by telephone or in person. The advantages and disadvantages of each method are summarised in Table 1. Telephone and personal surveys consume considerable time and resources, whereas postal and electronic surveys suffer from low response rates and response bias. Your route should therefore be chosen with care.

Methods for maximising response rates for self-administered surveys are listed in Table 2, taken from a Cochrane review.2 The methods for maximising responses to postal and electronic surveys differ considerably, but common elements include keeping the questionnaire short and logical and including incentives.

Methods for improving response rates in postal and electronic questionnaires2

  • Involve a statistician early on.
  • Run a pilot study to uncover problems.
  • Consider using a validated instrument.
  • Only ask what you ‘need to know’.
  • Consider guidelines on improving response rates.

The collected data will arrive in a number of forms depending on the method of collection. Data from telephone or personal interviews can be entered directly into a computer database, whereas postal data can be entered at a later stage. Electronic questionnaires can feed responses directly into a database. Problems arise from data-entry errors and from questionnaires returned with missing fields. As mentioned earlier, it is essential to involve a statistician from the beginning for help with data analysis. He or she will have helped determine the sample size required to ensure your study has enough power, and can also suggest tests of significance appropriate to your survey, such as Student's t-test or the chi-square test.
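As a concrete illustration of one such significance test, the Pearson chi-square statistic for a contingency table (for instance, satisfaction by survey mode) can be computed by hand; the counts below are hypothetical, not from any study in this article.

```python
def chi_square(observed):
    """Pearson chi-square statistic for a two-way contingency table."""
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, r in enumerate(row_totals):
        for j, c in enumerate(col_totals):
            expected = r * c / grand_total  # expected count under independence
            stat += (observed[i][j] - expected) ** 2 / expected
    return stat

# Hypothetical counts: rows = postal vs electronic, columns = satisfied vs not
table = [[30, 20],
         [45, 5]]
print(chi_square(table))  # → 12.0
```

With one degree of freedom, a statistic of 12.0 far exceeds the 5% critical value of about 3.84, so in this made-up example the two modes would differ significantly; a statistician would also check the expected-count assumptions before relying on the test.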

Conclusions

Survey research is a unique way of gathering information from a large cohort. Its advantages include a large population and therefore greater statistical power, the ability to gather large amounts of information, and the availability of validated instruments. However, surveys are costly, recall accuracy can be inconsistent, and the validity of a survey depends on its response rate. Proper design is vital to enable analysis of results, and pilot studies are critical to this process.


International Journal of Applied Technologies in Library and Information Management, Vol. 9 No. 2 (2023)

Open Access

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

Availability and Utilization of Information Resources for Students in University of Calabar Library, Cross River State, by Comfort Linus Inyang and Martina Ekpenyong Ekeng

This study investigated the availability and utilization of information resources for students in the University of Calabar Library, Cross River State, Nigeria. To achieve the purpose of the study, three research questions were formulated to guide it, and the literature was reviewed according to the variables under study. A survey research design was adopted. The total population of the study was four thousand and eighty (4,080), from which a sample of two hundred and four (204) registered library users was selected using simple random and accidental sampling techniques. A questionnaire, titled the Availability and Utilization of Information Resources Questionnaire (AUIRQ), was the main instrument for data collection; it was subjected to content validation by the project supervisor, and its reliability was established through a test-retest survey. Frequency counts, percentages, means and standard deviations were the statistical techniques adopted to answer the research questions. The analysis revealed that different types of information resources are available in the library, that the extent to which availability influences students' utilization of resources is high, and that information resources were accessible to students. Based on the findings, it was recommended, among other things, that university libraries should market their resources and services to attract users.
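The descriptive statistics this study relies on (frequency counts, percentages, mean and standard deviation) are straightforward to compute; the Likert-style scores below are hypothetical stand-ins for questionnaire responses.

```python
from collections import Counter
from statistics import mean, stdev

responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]  # hypothetical 5-point scale scores

freq = Counter(responses)                                             # frequency counts
pct = {score: 100 * n / len(responses) for score, n in freq.items()}  # percentages

print(freq[4], pct[4])             # → 4 40.0
print(mean(responses))             # → 3.9
print(round(stdev(responses), 2))  # → 0.99  (sample standard deviation)
```

Note that `statistics.stdev` computes the sample standard deviation (dividing by n − 1); `statistics.pstdev` would give the population version.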


  • Open access
  • Published: 14 October 2023

A scoping review of ‘Pacing’ for management of Myalgic Encephalomyelitis/Chronic Fatigue Syndrome (ME/CFS): lessons learned for the long COVID pandemic

  • Nilihan E. M. Sanal-Hayes 1,7,
  • Marie Mclaughlin 1,8,
  • Lawrence D. Hayes 1,
  • Jacqueline L. Mair (ORCID: orcid.org/0000-0002-1466-8680) 2,3,
  • Jane Ormerod 4,
  • David Carless 1,
  • Natalie Hilliard 5,
  • Rachel Meach 1,
  • Joanne Ingram 6 &
  • Nicholas F. Sculthorpe 1

Journal of Translational Medicine, volume 21, Article number: 720 (2023)


Controversy over treatment for people with myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS) is a barrier to appropriate treatment. Energy management, or pacing, is a prominent coping strategy for people with ME/CFS. While there is no unanimous definition of pacing in the literature or among healthcare providers, it typically comprises regulating activity to avoid post-exertional malaise (PEM), the worsening of symptoms after an activity. Until now, the characteristics of pacing and its effects on patients' symptoms had not been systematically reviewed. This is problematic, as the most common approach to pacing, how pacing is prescribed, and its pooled efficacy were unknown. Collating the evidence may help advise those suffering with similar symptoms, including long COVID, as practitioners would be better informed on the methodological approaches to adopt, how to implement pacing, and the expected outcomes.

In this scoping review of the literature, we aggregated type of, and outcomes of, pacing in people with ME/CFS.

Eligibility criteria

Original investigations concerning pacing were considered in participants with ME/CFS.

Sources of evidence

Six electronic databases (PubMed, Scholar, ScienceDirect, Scopus, Web of Science and the Cochrane Central Register of Controlled Trials [CENTRAL]) were searched; and websites MEPedia, Action for ME, and ME Action were also searched for grey literature, to fully capture patient surveys not published in academic journals.

A scoping review was conducted. Review selection and characterisation was performed by two independent reviewers using pretested forms.

Authors reviewed 177 titles and abstracts, resulting in 17 included studies: three randomised control trials (RCTs); one uncontrolled trial; one interventional case series; one retrospective observational study; two prospective observational studies; four cross-sectional observational studies; and five cross-sectional analytical studies. Studies included variable designs, durations, and outcome measures. In terms of pacing administration, studies used educational sessions and diaries for activity monitoring. Eleven studies reported benefits of pacing, four studies reported no effect, and two studies reported a detrimental effect in comparison to the control group.

Conclusions

Highly variable study designs and outcome measures, allied to poor-to-fair methodological quality, resulted in heterogeneous findings and highlight the need for more research examining pacing. Looking to the long COVID pandemic, our results suggest future studies should be RCTs utilising objectively quantified digitised pacing over a longer duration of examination (i.e. longitudinal studies), using the core outcome set for patient-reported outcome measures. Until these are completed, the literature base is insufficient to inform treatment practices for people with ME/CFS and long COVID.

Introduction

Post-viral illness occurs when individuals experience an extended period of feeling unwell after a viral infection [ 1 , 2 , 3 , 4 , 5 , 6 ]. While post-viral illness is generally a non-specific condition with a constellation of symptoms that may be experienced, fatigue is amongst the most commonly reported [ 7 , 8 , 9 ]. For example, our recent systematic review found there was up to 94% prevalence of fatigue in people following acute COVID-19 infection [ 3 ]. The increasing prevalence of long COVID has generated renewed interest in symptomology and time-course of post-viral fatigue, with PubMed reporting 72 articles related to “post-viral fatigue” between 2020 and 2022, but less than five for every year since 1990.

As the coronavirus pandemic developed, it became clear that a significant proportion of the population experienced symptoms which persisted beyond the initial viral infection, meeting the definition of a post-viral illness. Current estimates suggest one in eight people develop long COVID [ 10 ] and its symptomatology has repeatedly been suggested to overlap with clinical demonstrations of myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS). In a study by Wong and Weitzer [ 11 ], long COVID symptoms from 21 studies were compared to a list of ME/CFS symptoms. Of the 29 known ME/CFS symptoms the authors reported that 25 (86%) were reported in at least one long COVID study suggesting significant similarities. Sukocheva et al. [ 12 ] reported that long COVID included changes in immune, cardiovascular, metabolic, gastrointestinal, nervous and autonomic systems. When observed from a pathological stance, this list of symptoms is shared with, or is similar to, the symptoms patients with ME/CFS describe [ 13 ]. In fact, a recent article reported 43% of people with long COVID are diagnosed with ME/CFS [ 13 ], evidencing the analogous symptom loads.

A striking commonality between long COVID and similar conditions such as ME/CFS is the worsening of symptoms including fatigue, pain, cognitive difficulties, sore throat, and/or swollen lymph nodes following exertion. Termed post-exertional malaise (PEM) [ 14 , 15 , 16 , 17 ], and lasting from hours to several days, it is arguably one of the most debilitating side effects experienced by those with ME/CFS [ 16 , 17 , 18 ]. PEM is associated with considerably reduced quality of life amongst those with ME/CFS, with reduced ability to perform activities of daily living, leading to restraints on social and family life, mental health comorbidities such as depression and anxiety, and devastating employment and financial consequences [ 19 , 20 , 21 , 22 ]. At present, there is no cure or pharmacological treatment for PEM, and therefore effective symptom management strategies are required. This may be partly because the triggers of PEM are poorly understood and there is little evidence, beyond anecdote, for what causes it. The most common approach to managing PEM is to incorporate activity pacing into the day-to-day lives of those with ME/CFS, with the intention of reducing the frequency or severity of bouts of PEM [ 23 ]. Pacing is defined as an approach where patients are encouraged to be as active as possible within the limits imposed by the illness [ 23 , 24 , 25 ]. In practice, pacing requires individuals to determine a level at which they can function but which does not lead to a marked increase in fatigue and other symptoms [ 26 , 27 ].

Although long COVID is a new condition [ 3 , 14 ], the available evidence suggests substantial overlap with the symptoms of conditions such as ME/CFS and it is therefore pragmatic to consider the utility of management strategies (such as pacing) used in ME/CFS for people with long COVID. In fact, a recent Delphi study recommended that management of long COVID should incorporate careful pacing to avoid PEM relapse [ 28 ]. This position was enforced by a multidisciplinary consensus statement considering treatment of fatigue in long COVID, recommending energy conservation strategies (including pacing) for people with long COVID [ 29 ]. Given the estimated > 2 million individuals who have experienced long COVID in the UK alone [ 30 , 31 , 32 ], there is an urgent need for evidence-based public health strategies. In this context, it seems pragmatic to borrow from the ME/CFS literature.

From a historical perspective, the 2007 NICE guidelines advised that both cognitive behavioural therapy (CBT) and graded exercise therapy (GET) should be offered to people with ME/CFS [ 33 ]. As of the 2021 update, the NICE guidelines no longer advise CBT or GET, and the only recommended management strategy is pacing [ 34 ]. In the years between these guidelines, the landmark PACE trial [ 35 ] was published in 2011. This large randomised control trial (RCT; n = 639) compared pacing with CBT and GET and reported that GET and CBT were more effective than pacing for improving symptoms. Yet this study has come under considerable criticism from patient groups and clinicians alike [ 36 , 37 , 38 , 39 ]. This may partly explain why NICE no longer advise CBT or GET as of 2021, and only recommend pacing for symptom management in people with ME/CFS [ 34 ]. There has been some controversy over the best treatment for people with ME/CFS in the literature and support groups, potentially amplified by the ambiguity of evidence for pacing efficacy and for how pacing should be implemented. As such, before pacing can be advised for people with long COVID, it is imperative that the previous literature concerning pacing is systematically reviewed, because a consensus on implementing pacing is needed before practitioners can treat people with ME/CFS or long COVID effectively. A lack of agreement on pacing implementation is a barrier to adoption for both practitioners and patients. Despite several systematic reviews concerning pharmacological interventions or cognitive behavioural therapy in people with ME/CFS [ 36 , 40 , 41 ], to date there are no systematic reviews concerning pacing.

Despite the widespread use of pacing, the literature base is limited and includes clinical commentaries, case studies, case series, and few randomised control trials. Consequently, while a comprehensive review of the effects of pacing in ME/CFS is an essential tool to guide symptom management advice, the available literature means that effective pooling of data is not feasible [ 42 ], and a traditional systematic review and meta-analysis, with a tightly focussed research question, would be premature [ 43 ]. We therefore elected to undertake a scoping review. This approach retains the systematic approach to literature searching but aims to map out the current state of the research [ 43 ]. Using the framework of Arksey and O'Malley [ 44 ], a scoping review uses a broad set of search terms and includes a wide range of study designs and methods (in contrast to a systematic review [ 44 ]). This approach has the benefit of clarifying key concepts, surveying current data collection approaches, and identifying critical knowledge gaps.

We aimed to provide an overview of existing literature concerning pacing in ME/CFS. Our three specific objectives of this scoping review were to (1) conduct a systematic search of the published literature concerning ME/CFS and pacing, (2) map characteristics and methodologies used, and (3) provide recommendations for the advancement of the research area.

Protocol and registration

The review was conducted and reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for scoping reviews (PRISMA-ScR) guidelines [ 45 ] and the five-stage framework outlined in Arksey and O’Malley [ 44 ]. Registration is not recommended for scoping reviews.

Studies that met the following criteria were included in this review: (1) published as a full-text manuscript; (2) not a review; (3) participants with ME/CFS; (4) studies employed a pacing intervention or retrospective analysis of pacing or a case study of pacing. Studies utilising sub-analysis of the pacing, graded activity, and cognitive behaviour therapy: a randomised evaluation (PACE) trial were included as these have different outcome measures and, as this is not a meta-analysis, this will not influence effect size estimates. Additionally, due to the paucity of evidence, grey literature has also been included in this review.

Search strategy

The search strategy consisted of a combination of free-text and MeSH terms relating to ME/CFS and pacing, which were developed through an examination of published original literature and review articles. Example search terms for PubMed included: ‘ME/CFS’ OR ‘ME’ OR ‘CFS’ OR ‘chronic fatigue syndrome’ OR ‘PEM’ OR ‘post exertional malaise’ OR ‘PENE’ OR ‘post-exertional neuroimmune exhaustion’ AND ‘pacing’ OR ‘adaptive pacing’. The search was performed within title/abstract. Full search terms can be found in Additional file 1 .
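The structure of such a search string (OR between synonyms within a concept group, AND between groups) can be sketched generically; the helper below is not from the paper, merely an illustration of how the published terms combine.

```python
me_terms = ["ME/CFS", "ME", "CFS", "chronic fatigue syndrome",
            "PEM", "post exertional malaise"]
pacing_terms = ["pacing", "adaptive pacing"]

def boolean_query(*groups):
    """OR the synonyms inside each group, then AND the groups together."""
    clauses = ["(" + " OR ".join(f"'{t}'" for t in group) + ")" for group in groups]
    return " AND ".join(clauses)

print(boolean_query(["a", "b"], ["c"]))  # → ('a' OR 'b') AND ('c')
print(boolean_query(me_terms, pacing_terms))
```

Real database interfaces add field tags (e.g. restricting to title/abstract, as the authors did), but the OR-within, AND-between logic is the same.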

Information sources

Six electronic databases [PubMed, Scholar, ScienceDirect, Scopus, Web of Science, and the Cochrane Central Register of Controlled Trials (CENTRAL)] were searched to identify original research articles published from the earliest available date up until 02/02/2022. Additional records were identified through reference lists of included studies. ‘Grey literature’ repositories including MEPedia, Action for ME, and ME Action were also searched with the same terms.

Study selection and data items

Once each database search was completed and manuscripts were sourced, all studies were downloaded into a single reference list (Zotero, version 6.0.23) and duplicates were removed. Titles and abstracts were screened for eligibility by two reviewers independently and discrepancies were resolved through discussion between reviewers. Subsequently, full text papers of potentially relevant studies were retrieved and assessed for eligibility by the same two reviewers independently. Any uncertainty by reviewers was discussed in consensus meetings and resolved by agreement. Data extracted from each study included sample size, participant characteristics, study design, trial registration details, study location, pacing description (type), intervention duration, intervention adherence, outcome variables, and main outcome data. Descriptions were extracted with as much detail as was provided by the authors. Study quality was assessed using the Physiotherapy Evidence Database (PEDro) scale [ 46 , 47 ].
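A minimal sketch of the deduplication step is shown below, matching records on a normalised title. This is an assumption for illustration only; reference managers such as Zotero use richer matching (DOI, year, authors).

```python
def dedupe(records):
    """Drop records whose normalised title has already been seen."""
    seen, unique = set(), []
    for rec in records:
        # normalise: lowercase and keep only letters/digits
        key = "".join(ch for ch in rec["title"].lower() if ch.isalnum())
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

refs = [
    {"title": "Pacing in ME/CFS"},
    {"title": "PACING IN ME/CFS."},       # differs only in case and punctuation
    {"title": "A scoping review of pacing"},
]
print(len(dedupe(refs)))  # → 2
```

In this review the analogous step reduced 281 database records to 177 unique titles for screening.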

Role of the funding source

The study sponsors had no role in study design, data collection, analysis, or interpretation, nor writing the report, nor submitting the paper for publication.

Study selection

After the initial database search, 281 records were identified (see Fig.  1 ). Once duplicates were removed, 177 titles and abstracts were screened for inclusion resulting in 22 studies being retrieved as full text and assessed for eligibility. Of those, five were excluded, and 17 articles remained and were used in the final qualitative synthesis.

Figure 1: Schematic flow diagram describing exclusions of potential studies and final number of studies. RCT = randomised control trial. CT = controlled trial. UCT = uncontrolled trial

Study characteristics

Study characteristics are summarised in Table 1 . Of the 17 studies included, three were randomised control trials (RCTs [ 35 , 48 , 49 ]); one was an uncontrolled trial [ 50 ]; one was a case series [ 51 ]; one was a retrospective observational study [ 52 ], two were prospective observational studies [ 53 , 54 ]; four were cross-sectional observational studies [ 25 , 55 , 56 ]; and five were cross-sectional analytical studies [ 57 , 58 , 59 , 60 , 61 ] including sub-analysis of the PACE trial [ 35 , 56 , 59 , 61 ]. Seven of the studies were registered trials [ 35 , 48 , 49 , 50 , 56 , 57 , 58 ]. Diagnostic criteria for ME/CFS are summarised in Table 2 .

Types of pacing

Pacing interventions

Of the 17 studies included, five implemented their own pacing interventions and will be discussed in this section. Sample sizes ranged from n = 7 in an interventional case series [ 51 ] to n = 641 participants in the largest RCT [ 35 ]. The first of these five studies considered an education session on pacing and self-management as the ‘pacing’ group, and a ‘pain physiology education’ group as the control group [ 49 ]. Two studies included educational sessions provided by a therapist plus activity monitoring via ActiGraph accelerometers [ 51 ] and diaries [ 48 ] at baseline and follow-up. In the first of these two studies, Nijs and colleagues [ 51 ] implemented a ‘self-management program’ which asked patients to estimate their current physical capabilities prior to commencing an activity and then complete 25–50% less than their perceived energy envelope. They [ 51 ] did not include a control group and had a sample size of only n = 7. Six years later, the same research group [ 48 ] conducted another pacing study which utilised relaxation as a comparator group (n = 12 and n = 14 in the pacing and relaxation groups, respectively). The pacing group underwent a pacing phase whereby participants again aimed to complete 25–50% less than their perceived energy envelope, followed by a gradual increase in exercise after the pacing phase (the total intervention spanned three weeks, and it is unclear how much was allocated to pacing and how much to activity increase). Therefore, it could be argued that Kos et al. [ 48 ] really assessed pacing followed by a gradual exercise increase, as outcome measures were assessed following the graded activity phase. Another pacing intervention delivered weekly educational sessions for six weeks and utilised a standardised rehabilitation programme using the ‘activity pacing framework’ [ 50 ] in a single-arm feasibility study with no comparator group.
Finally, the PACE trial adopted an adaptive pacing therapy intervention consisting of occupational therapists helping patients to plan and pace activities utilising activity diaries to identify activities associated with fatigue and staying within their energy envelope [ 35 ]. This study incorporated standard medical care, cognitive behavioural therapy (CBT) and graded exercise therapy (GET) as comparator groups [ 35 ]. It is worth noting that the pacing group and the CBT group were both ‘encouraged’ to increase physical activity levels as long as participants did not exceed their energy envelope. Although not all five intervention studies explicitly mentioned the “Energy Envelope Theory”, which dictates that people with ME/CFS should not necessarily increase or decrease their activity levels, but moderate activity and practice energy conservation [ 62 ], all intervention studies used language analogous to this theory, such as participants staying within limits, within capacity, or similar.

The interventions included in this review were of varying durations, from a single 30-min education session [ 49 ], a 3-week (one session a week) educational programme [ 51 ], a 3-week (3 × 60–90 min sessions/week) educational programme [ 48 ], a 6-week rehabilitation programme [ 50 ], to a 24-week programme [ 35 ]. Intervention follow-up durations also varied across studies from immediately after [ 49 ], 1-week [ 51 ], 3-weeks [ 48 ], 3-months [ 50 ], and 1-year post-intervention [ 35 ].

Observational studies of pacing

Eight studies were observational and therefore included no intervention. Observational study sample sizes ranged from 16 in a cross-sectional interview study [ 25 ] to 1428 in a cross-sectional survey [ 52 ]. One study involved a retrospective analysis of participants’ own pacing strategies, varying from self-guided pacing to pacing administered by a therapist, compared with implementation of CBT and GET [ 52 ]. Five involved a cross-sectional analysis of participants’ own pacing strategies, which varied from activity adjustment, planning and acceptance [ 50 , 55 ] to the Energy Envelope method [ 58 , 60 ]. Two studies were prospective observational studies investigating the Energy Envelope Theory [ 53 , 54 ]. Four studies [ 56 , 57 , 59 , 61 ] included in this review involved sub-analysis of results of the PACE trial [ 35 ].

Outcome measures

Quantitative health outcomes

ME/CFS severity and general health status were the most common outcome measures across studies (16/17) [ 35 , 48 , 49 , 50 , 51 , 52 , 53 , 54 , 55 , 56 , 57 , 58 , 59 , 60 , 61 , 63 ]. Studies utilised different instruments, including the Short-Form 36 (SF-36; 8/16) [ 35 , 51 , 53 , 54 , 56 , 57 , 58 , 60 ], SF-12 (2/16) [ 50 , 63 ], ME symptom and illness severity (2/16) [ 52 , 55 ], the Patient Health Questionnaire-15 (PHQ-15; 1/16) [ 59 ], the DePaul symptom questionnaire (DSQ; 1/16) [ 58 ], and the Patient Health Questionnaire-9 (1/16) [ 50 ]. Additionally, some studies used diagnostic criteria for ME/CFS as an outcome measure to determine recovery [ 57 , 59 , 61 ].

Pain was assessed by most included studies (11/17) [ 35 , 49 , 50 , 51 , 53 , 54 , 55 , 57 , 59 , 60 , 61 , 63 ]. Two studies [ 59 , 61 ] used the international CDC criteria for CFS, which contain painful symptoms central to a diagnosis of CFS, including muscle pain and joint pain. Other methods of assessment included the Brief Pain Inventory (1/11) [ 53 ], Chronic Pain Coping Inventory (CPCI; 1/11) [ 49 ], Pain Self-Efficacy Questionnaire (PSEQ; 1/11) [ 50 ], Tampa Scale for Kinesiophobia, CFS version (1/11) [ 49 ], algometry (1/11) [ 49 ], Knowledge of Neurophysiology of Pain Test (1/11) [ 49 ], Pain Catastrophizing Scale (1/11) [ 49 ], Pain Anxiety Symptoms Scale short version (PASS-20; 1/11) [ 50 ], and the Pain Numerical Rating Scale (NRS; 1/11) [ 63 ].

Fatigue or post-exertional malaise was assessed by 11 of the 17 studies [ 35 , 48 , 50 , 51 , 53 , 54 , 56 , 57 , 60 , 61 , 63 ]. Again, measurement instruments were divergent between studies and included the Chalder Fatigue Questionnaire (CFQ; 4/11) [ 35 , 50 , 57 , 63 ], Fatigue Severity Scale (2/11) [ 53 , 60 ], the Chronic Fatigue Syndrome Medical Questionnaire (1/11) [ 60 ], and Checklist Individual Strength (CIS; 2/11) [ 48 , 51 ].

Anxiety and depression were also common outcome measures, utilised by four studies (4/17) [ 50 , 53 , 59 , 63 ]. These were also assessed using different instruments including Hospital Anxiety and Depression Scale (HADS; 2/4) [ 59 , 63 ], Generalised Anxiety Disorder Assessment (1/4 [ 50 ]), Beck Depression Inventory (BDI-II; 1/4) [ 53 ], Beck Anxiety Inventory (BAI; 1/4) [ 53 ], and Perceived Stress Scale (PSS; 1/4) [ 53 ].

Outcome measures also included sleep (2/17) [ 53 , 59 ], assessed by The Pittsburgh Sleep Quality Index (1/2) [ 53 ] and Jenkins sleep scale (1/2) [ 59 ]; and quality of life (2/17) [ 50 , 53 ] as assessed by the EuroQol five-dimensions, five-levels (EQ-5D-5L; 1/2) [ 50 ] and The Quality-of-Life Scale (1/2) [ 53 ]. Self-Efficacy was measured in four studies [ 50 , 53 , 59 , 60 ], assessed by the Brief Coping Orientation to Problems Experienced Scale (bCOPE; 1/4) [ 60 ] and the Chronic Disease Self-Efficacy measure (3/4) [ 50 , 53 , 59 ].

Quantitative evaluation of pacing

Some studies (4/17) [ 25 , 50 , 52 , 63 ] included assessments of the participants’ experiences of pacing, using the Activity Pacing Questionnaire (APQ-28; 1/4) [ 50 ], the APQ-38 (2/4) [ 25 , 63 ], a re-analysis of the 228-question survey regarding treatment (1/4) [ 52 ] originally produced by the ME Association [ 55 ], and qualitative semi-structured telephone interviews regarding the appropriateness of courses in relation to individual patient needs (1/4) [ 25 ]. The APQ-28 and -38 have been previously validated, but the 228-question survey has not. When outcome measures included physical activity levels (4/17), the Canadian Occupational Performance Measure (COPM) was used in two studies [ 48 , 51 ], and two studies used accelerometers to record physical activity [ 51 , 54 ]. Of these two studies, Nijs [ 51 ] examined accelerometry after a 3-week intervention based on the Energy Envelope Theory and Brown et al. [ 54 ] evaluated the Energy Envelope Theory of pacing over 12 months.

Other outcomes

Two [ 53 , 59 ] of the 17 studies included structured clinical interviews for the Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV) to assess psychiatric comorbidity and psychiatric exclusions. One study included a disability benefits questionnaire [ 55 ], and one study included an employment and education questionnaire [ 55 ]. Additionally, satisfaction with primary care was also used as an outcome measure (2/17) [ 25 , 55 ], assessed using the Chronic Pain Coping Inventory (CPCI).

Efficacy of pacing interventions

The majority of studies (12/17) [ 25 , 48 , 50 , 51 , 52 , 53 , 54 , 55 , 56 , 58 , 60 , 63 ] highlighted improvements in at least one outcome following pacing (Fig.  2 ). When the effect of pacing was assessed by ME symptomology and general health outcomes, studies reported pacing to be beneficial [ 25 , 50 , 51 , 53 , 54 , 55 , 56 , 58 ]. It is worth noting however that pacing reportedly worsened ME symptoms in 14% of survey respondents, whilst improving symptoms in 44% of respondents [ 52 ]. Most studies using fatigue as an outcome measure reported pacing to be efficacious (7/10) [ 50 , 51 , 53 , 54 , 56 , 60 , 63 ]. However, one study reported no change in fatigue with a pacing intervention (1/10) [ 35 ], and 2/10 studies [ 53 , 63 ] reported a worsening of fatigue with pacing. Physical function was used to determine the efficacy of pacing in 11 studies [ 35 , 48 , 50 , 51 , 53 , 54 , 56 , 58 , 59 , 60 , 63 ]. Of these, the majority found pacing improved physical functioning (8/10) [ 48 , 50 , 51 , 53 , 54 , 56 , 58 , 60 ], with 1/10 [ 35 ] studies reporting no change in physical functioning, and 1/10 [ 59 ] reporting a worsening of physical functioning from pre- to post-pacing. Of the seven studies [ 35 , 49 , 50 , 51 , 53 , 54 , 60 ] which used pain to assess pacing efficacy, 4/7 [ 50 , 51 , 53 , 60 ] reported improvements in pain and 3/7 [ 35 , 51 , 53 ] reported no change in pain scores with pacing. All studies reporting quality of life (1/1) [ 53 ], self-efficacy (3/3) [ 50 , 53 , 59 ], sleep (2/2) [ 53 , 59 ], and depression and anxiety (4/4) [ 50 , 53 , 59 , 63 ], found pacing to be efficacious for ME/CFS participants.

Figure 2: Bubble plot displaying number of studies reporting each domain (x-axis) and the percentage of studies reporting improvement with pacing (y-axis), including a coloured scale of improvement from 0–100%. PEM = post-exertional malaise, 6MWT = 6-min walk time, CFS = chronic fatigue syndrome, DSQ = DePaul Symptom Questionnaire, PA = physical activity, HRQOL = health-related quality of life, COPM = the Canadian Occupational Performance Measure
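The per-domain improvement percentages of the kind plotted on such a y-axis can be reproduced directly from the counts reported in the text (fatigue 7/10, physical function 8/10, pain 4/7); the snippet below simply performs that arithmetic.

```python
# (studies reporting improvement, studies assessing the domain), from the text
domains = {
    "fatigue": (7, 10),
    "physical function": (8, 10),
    "pain": (4, 7),
}
pct_improved = {name: round(100 * improved / total)
                for name, (improved, total) in domains.items()}
print(pct_improved)  # → {'fatigue': 70, 'physical function': 80, 'pain': 57}
```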

Participant characteristics

The majority of studies (10/17) [25, 50, 52, 53, 54, 58, 59, 60, 61, 63] did not report the age of participants. For those that did, mean age ranged from 32 ± 14 to 43 ± 13 years. Where studies reported sex (11/17) [35, 48, 49, 50, 51, 54, 55, 56, 57, 58, 60], samples were predominantly female (75–100%). Only six studies [35, 54, 56, 57, 58, 60] reported ethnicity, with cohorts predominantly Caucasian (94–98%). Time since diagnosis was mostly unreported (12/17) [25, 48, 49, 50, 52, 53, 54, 58, 59, 60, 61, 63] but ranged from 32 to 96 months, with a cross-sectional survey reporting that 2% of participants were diagnosed 1–2 years previously; 6% 3–4 years since diagnosis; 13% 3–4 years since diagnosis; 12% 5–6 years since diagnosis; 20% 7–10 years since diagnosis; 29% 11–21 years since diagnosis; 13% 21–30 years since diagnosis; and 5% > 30 years since diagnosis. Of the studies which reported comorbidities (6/17) [25, 35, 50, 56, 57, 63], these were chronic pain, depressive disorder, and psychiatric disorder.

Study location

Of the 17 studies, 14 were from Europe [25, 35, 48, 49, 50, 51, 52, 55, 56, 57, 58, 59, 61, 63] and three were from North America [53, 54, 60]. Of the 14 European studies, ten [25, 35, 50, 52, 55, 56, 57, 59, 61, 63] were conducted in the United Kingdom, three in Belgium [48, 49, 51], and one was a multicentred study between the United Kingdom and Norway [58].

Recruitment strategy

Of the 17 studies, three [53, 54, 60] recruited participants via newspaper announcements and physician referrals, two [50, 63] recruited patients referred by a consultant from a National Health Service (NHS) Trust following a pain diagnosis, two [52, 55] recruited via online platforms, two [59, 61] recruited from secondary care clinics, and two used the PACE trial databases [56, 57]. Moreover, one study recruited from a hospital [58], one from physiotherapist referrals [25], two from specialist clinic centres [35, 64], one from the waiting list of a rehabilitation centre [48], and one from medical files [49].

Study settings

Ten studies were carried out in hospital and clinic settings [25, 35, 48, 49, 50, 51, 58, 59, 61, 63]. Two studies were performed on online platforms [52, 55]. Three studies did not report the study setting [53, 54, 60]. Two studies generated output from the PACE trial databases [56, 57].

Adherence and feasibility

All five intervention studies reported adherence rates (defined as the number of sessions attended), which ranged from 4 to 44% (4% [49], 8% [35], 25% [48], 29% [51], and 44% [50]). One study [50] reported that the median number of rehabilitation programme sessions attended was five out of a possible six, with 58.9% of participants attending ≥ 5 sessions, 83.2% attending at least one educational session on activity pacing, and 56.1% attending both activity pacing sessions.

This scoping review summarises the existing literature, with a view to helping physicians and healthcare practitioners appraise the evidence for pacing in ME/CFS and apply this knowledge to other post-viral fatiguing conditions. Overall, studies generally reported pacing to be beneficial for people with ME/CFS. The exception to this trend is the controversial PACE trial [36, 37, 38, 39], which we expand on in subsequent sections. We believe the information generated within this review can facilitate discussion of research opportunities and issues that need to be addressed in future studies concerning pacing, particularly given the immediate public health issue of the long COVID pandemic. As mentioned, we found some preliminary evidence for improved symptoms following pacing interventions or strategies. However, we wish to caution the reader that the current evidence base is extremely limited and hampered by several limitations which preclude clear conclusions on the efficacy of pacing. Firstly, studies were of poor to fair methodological quality (indicated by the PEDro scores), often with small sample sizes, and therefore unknown power to detect change. Moreover, very few studies implemented pacing, with most studies merely consulting people’s views on pacing. This may of course introduce multiple biases, such as reporting, recruitment, survivorship, confirmation, and availability biases, to name but a few. Thus, there is a pressing need for more high-quality intervention studies. Secondly, the reporting of the pacing strategies used was inconsistent and lacked detail, making it difficult to describe current approaches or to implement them in future research or symptom management strategies. Furthermore, outcome evaluations varied greatly between studies, which prevents any appropriate synthesis of research findings.

The paucity of evidence concerning pacing is worrying, given that pacing is the only NICE-recommended management strategy for ME/CFS following the 2021 update [34]. Given the analogous nature of long COVID and ME/CFS, patients and practitioners will be looking to the ME/CFS literature for guidance on symptom management. There is an urgent need for high-quality studies (such as RCTs) investigating the effectiveness of pacing, and for better reporting of pacing intervention strategies, so that clear recommendations can be made to patients. If this does not happen soon, there will be serious healthcare and economic implications for years to come [65, 66].

Efficacy of pacing

Most studies (12/17) highlighted improvements in at least one outcome measure following pacing. Pacing was self-reported to be the most efficacious, safe, acceptable, and preferred form of activity management for people with ME/CFS [55]. Pacing was reported to improve symptoms and general health outcomes [25, 50, 52, 58, 63], fatigue and PEM [48, 50, 51, 53, 54, 55, 56, 60, 63], physical functioning [48, 50, 51, 53, 56, 58, 60, 63], pain [25, 50, 55, 63], quality of life [50], self-efficacy [50, 53], sleep [53, 55], and depression and anxiety [50, 53, 63]. These positive findings provide hope that those with ME/CFS, and other chronic fatiguing conditions such as long COVID, can improve quality of life through symptom management.

Conversely, some studies reported no effects of pacing on ME/CFS symptoms [52], fatigue, physical functioning [35], or pain scores [49, 61]. Some studies even found pacing to have detrimental effects in those with ME/CFS, including a worsening of symptoms in 14% of survey participants recalling previous pacing experiences [52]. Furthermore, a worsening of fatigue [35, 59] and of physical functioning from pre- to post-pacing [35, 57, 59, 61] was reported by the PACE trial and its sub-analyses [56, 57, 61]. The PACE trial [35], a large RCT (n = 639) comparing pacing with CBT and GET, reported that GET and CBT were more effective than pacing at reducing ME/CFS-related fatigue and improving physical functioning. However, the methodology and conclusions of the PACE trial have been heavily criticised, mainly because the authors lowered the thresholds used to determine improvement [36, 37, 38, 67]. With this in mind, Sharpe et al. [56] surveyed 75% of the PACE trial participants 1 year post-intervention and reported that pacing improved fatigue and physical functioning, with effects similar to CBT and GET.

Lessons for pacing implementation

All pacing intervention studies (5/5) implemented educational or coaching sessions. These educational components were poorly reported in terms of their specific content and how and where they had been developed, with unclear pedagogical approaches. Consequently, even where interventions reported reduced PEM or improved symptoms, it would be impossible to translate that research into practice, future studies, or clinical guidance, given the ambiguity of reporting. Sessions typically contained pacing themes such as activity adjustment (decreasing, breaking up, and rescheduling activities based on energy levels), activity consistency (maintaining a consistently low level of activity to prevent PEM), activity planning (planning activities and rest around available energy levels), and activity progression (slowly progressing activity once a steady baseline is maintained) [35, 48, 49, 50, 51]. We feel it is pertinent to note here that, although activity progression has been incorporated as a pacing strategy in these included studies, some view activity progression as a form of GET. The NICE definition of GET is “first establishing an individual's baseline of achievable exercise or physical activity, then making fixed incremental increases in the time spent being physically active” [34]. Thus, this form of pacing can also be considered a type of ‘long-term GET’ in which physical activity progression is performed over weeks or months, with fixed incremental increases in the time spent being physically active.

Intervention studies attempted to create behaviour change through educational programmes to modify physical activity and plan behaviours. However, none of these studies detailed integrating any evidence-based theories of behaviour change [68] or reported using any frameworks to support behaviour change objectives. This is unfortunate, since there is good evidence that theory-driven behaviour change interventions result in greater intervention effects [69]. Indeed, there is a large body of work on methods of behaviour change covering public health messaging, education, and intervention design, which has largely been ignored by the pacing literature. Interventions relied on subjective pacing (5/5 studies), with strategies including keeping an activity diary (3/5 studies) to identify links between activity and fatigue [35, 48, 50]. Given the high prevalence of ‘brain fog’ within ME/CFS [70, 71, 72, 73], recall may be extremely difficult and there is significant potential for under-reporting. Other strategies included simply asking participants to estimate the energy available for daily activities (2/5 studies [48, 51]). Again, this is subjective and relies on participants’ ability to recall previous consequences of the activity. Other methods of activity tracking and measuring energy availability, such as wearable technology [74, 75, 76, 77, 78], could provide a more objective measure of adherence and pacing strategy fidelity in future studies. Despite technology such as accelerometers being widely accessible since well before the earliest interventional study included in this review (published in 2009), none of the interventional studies used objective activity tracking to monitor pacing and provide feedback to participants. One study considered accelerometry alongside an activity diary [51]. However, accelerometry served as an outcome variable, to assess change in activity levels from pre- to post-intervention, and was not part of the intervention itself (which was one pacing coaching session per week for 3 weeks). Moreover, most research-grade accelerometers cannot be used as part of an intervention, since they provide no continuous feedback and must be retrieved by the research team before any data can be accessed; consequently, their use is mostly limited to outcome assessment. As pacing involves limiting physical activity to prevent push-crash cycles, it is striking that only two studies objectively measured physical activity to quantify changes in activity as a result of pacing [51, 54]; for all other studies, it is therefore unclear whether pacing was successfully implemented.
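As a purely hypothetical sketch of how such objective quantification might look, daily step counts from a wearable could be reduced to a simple day-to-day variability metric, where large swings are suggestive of push-crash cycling and low values reflect the activity consistency that pacing aims for. The function name, metric choice, and example figures below are illustrative assumptions, not taken from the reviewed studies:

```python
from statistics import mean, stdev

def activity_variability(daily_steps):
    """Coefficient of variation of daily step counts.

    Higher values indicate larger day-to-day swings in activity
    (possible push-crash cycling); lower values indicate more
    consistent activity. Illustrative metric only.
    """
    if len(daily_steps) < 2:
        raise ValueError("need at least two days of data")
    return stdev(daily_steps) / mean(daily_steps)

# Hypothetical example: an erratic week vs a paced, consistent week
erratic = [12000, 2000, 11000, 1500, 13000, 1000, 9000]
paced = [5000, 5200, 4800, 5100, 4900, 5000, 5300]
assert activity_variability(erratic) > activity_variability(paced)
```

A metric of this kind, computed continuously from wearable data, is one way future studies could verify that pacing was actually implemented rather than relying on diary recall.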

By exploring the pacing strategies previously used, in both intervention studies and more exploratory studies, we can identify and recommend approaches to improve symptoms of ME/CFS. These approaches can be categorised as follows: activity planning, activity consistency, activity progression, activity adjustment, and staying within the Energy Envelope [50, 53, 60, 63]. Activity planning was identified as a particularly effective therapeutic strategy, resulting in improved mean scores for all symptoms included in the APQ-28, reduced current pain, and improved physical fatigue, mental fatigue, self-efficacy, quality of life, and mental and physical functioning [50]. Activity planning aligns with the self-regulatory behaviour change technique ‘Action Planning’ [79], which is commonly used to increase physical activity behaviour. In the case of ME/CFS, activity planning is instead used to minimise physical activity bouts, to prevent expending too much energy and to avoid PEM. Activity consistency, meaning undertaking similar amounts of activity each day, was also associated with reduced levels of depression and exercise avoidance, and higher levels of physical function [63]. Activity progression was associated with higher levels of current pain, and activity adjustment with depression, avoidance, and lower levels of physical function [63]. Staying within the Energy Envelope was reported to reduce PEM severity [53, 60], improve physical functioning [53, 60] and ME/CFS symptom scores [53], and was associated with more hours engaged in activity than in individuals with lower available energy [53]. These results suggest that effective pacing strategies would include activity planning, consistency, and energy management techniques while avoiding progression. These data are, of course, limited by the small number of mostly low-quality studies and should be interpreted with some caution.
Nevertheless, these are considerations that repeatedly appear in the literature and, as such, warrant deeper investigation. In addition, and as outlined earlier, most studies are relatively old, and we urgently need better insight into how modern technologies, particularly longitudinal activity tracking and contemporaneous heart-rate feedback, might improve (or otherwise) adaptive pacing. Such longitudinal tracking would also enable activities and other behaviours (sleep, diet, stress) to be linked to bouts of PEM, enabling deeper insight into potential PEM triggers and possible mitigations.
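Staying within the Energy Envelope is sometimes expressed in the envelope literature as a ratio of energy expended to perceived energy available. As a hypothetical illustration only (the function name and the 0–100 self-rating scale are our assumptions, not details from the reviewed studies), such a ratio could be computed as:

```python
def energy_quotient(expended, available):
    """Energy expended as a percentage of perceived available energy.

    In this illustrative sketch, values at or below 100 suggest the
    person stayed within their Energy Envelope; values above 100
    suggest overexertion, which pacing strategies aim to avoid.
    """
    if available <= 0:
        raise ValueError("available energy must be positive")
    return 100 * expended / available

# Hypothetical self-ratings on a 0-100 scale
assert energy_quotient(40, 50) <= 100   # within the envelope
assert energy_quotient(80, 50) > 100    # overexertion
```

A simple quotient of this kind could be logged daily via an mHealth app, turning the Energy Envelope from a qualitative instruction into a trackable quantity.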

The PACE trial

We feel it would be remiss of us not to specifically address the PACE trial within this manuscript, as five of the 17 included studies resulted from it [35, 56, 57, 59, 61]. There has been considerable discussion around the PACE trial, which has been particularly divisive and controversial [37, 38, 39, 59, 67, 80, 81]. In the PACE trial, GET and CBT were deemed superior to pacing by the authors. Despite its size and funding, the PACE trial has received several published criticisms and rebuttals. Notably, NICE's most recent ME/CFS guideline update removed GET and CBT as suggested treatment options, which had hitherto been underpinned by the PACE findings. While we will not restate the criticisms and rebuttals here, what is not in doubt is that the PACE trial has dominated discussions of pacing, representing almost a third of all the studies in this review. However, the trial results were published over a decade ago, with the study protocol devised almost two decades ago [82]. The intervening time has seen a revolution in mobile and wearable technology and in the ability to remotely track activity and provide real-time feedback in a way that was not available at that time. Furthermore, there has been no substantive research since the PACE trial that has attempted such work. Indeed, possibly driven by the reported lack of effect of pacing in the PACE trial, this review has demonstrated the dearth of progress and innovation in pacing research since its publication. Therefore, regardless of its findings or criticisms, the pacing implementation in the PACE trial is dated, and there is an urgent need for more technologically informed approaches to pacing research.

Limitations of the current evidence

The first limitation of the literature included in this scoping review is that not all studies followed the minimum data set (MDS) of patient-reported outcome measures (PROMs) agreed upon by the British Association of CFS/ME Professionals (BACME) (fatigue, sleep quality, self-efficacy, pain/discomfort, anxiety/depression, mobility, activities of daily living, self-care, and illness severity) [83, 84]. All but one study included in this review measured illness severity, most studies included fatigue and pain/discomfort, and some included assessments of anxiety/depression. There was a lack of quantitative assessment of sleep quality, self-efficacy, mobility, activities of daily living, and self-care. Therefore, studies did not consistently capture the diverse nature of the symptoms experienced, with crucial domains missing from the analyses. The MDS of PROMs was established in 2012 [83, 84] and is therefore not applicable to studies published prior to 2012 [35, 49, 51, 53, 54]. However, the 12 studies carried out after this time should have considered the MDS to elucidate the effects of pacing on ME/CFS. Importantly, despite PEM being a central characteristic of ME/CFS, only two studies included PEM as an outcome measure [55, 60]. This may be because of the difficulty of accurately measuring fluctuating symptoms: PEM occurs multiple times over a period of months, and therefore pre- to post- and cross-sectional designs cannot adequately capture PEM incidence. It is thus likely that studies opted to measure general fatigue instead. More appropriate longitudinal study designs are required to track PEM over time and capture a more representative picture of PEM patterns.
Secondly, reporting of participant characteristics was inadequate, although in the studies that did describe participants, characteristics were congruent with the epidemiological literature on ME/CFS populations (i.e., 60–65% female) [85]; in this respect, the included studies were representative samples. However, the lack of reporting of participant characteristics limits the inferences we can draw concerning any population-related effects (i.e., whether older, or male, or European people, or people referred by a national health service, would be more or less likely to respond positively to pacing). Thirdly, comparison groups (where included) were not ideal, with CBT or GET sometimes used as comparators to pacing [35], and often no true control group included. Fourthly, there is a distinct lack of high-quality RCTs (as mentioned throughout this manuscript). Finally, in reference to the previous section, inferences from the literature are dated and do not reflect the technological capabilities of 2023.

Recommendations for advancement of the investigative area

It is clear from the studies included in this scoping review that, for the last decade or more, progress and innovation in pacing research have been limited. This is unfortunate for several reasons. People with ME/CFS or long COVID are, of course, invested in their recovery. From our patient and public involvement (PPI) group engagement, it is clear many are ahead of the research and are using wearable technology to track steps, heart rate, and, in some cases, heart rate variability to improve their own pacing practice. While the lack of progress in the research makes this an understandable response by patients, it is also problematic: without underpinning research, patients may base decisions on individual reports and trial-and-error approaches in the absence of evidence-based guidance.

A more technologically informed pacing approach could be implemented by using wearable trackers [77, 78, 86, 87] to provide participants with live updates on their activity, integrated with research-informed messaging aimed at supporting behaviour change, as has been trialled in other research areas [88, 89, 90, 91]. However, more work is needed to evaluate how best to incorporate wearable activity trackers and which metrics are most helpful.

A more technologically informed approach could also benefit longitudinal symptom tracking, which is particularly useful given the highly variable symptom loads of ME/CFS and the episodic nature of PEM. This would overcome reliance on assessments at a single point in time (as conducted in the studies within this review). Similarly, mobile health (mHealth) approaches allow questionnaires to be digitised, making them easier to complete for participants who find holding a pen or reading small font problematic [92]. Reminders and notifications can also help patients complete tasks [77, 93, 94, 95]. This approach has the added advantage of allowing contemporaneous data collection rather than relying on pre- to post-intervention designs limited by recall bias. Future work must try to leverage these approaches: unless we collect large data sets on symptoms and behaviours (i.e., activity, diet, sleep, and pharmacology) in people with conditions like ME/CFS, we will not be able to leverage emerging technologies such as AI and machine learning to improve the support and care for people with these debilitating conditions. The key areas for research outlined in the NICE guidelines (2021 update) speak to this, with specific mention of improved self-monitoring strategies, sleep strategies, and dietary strategies, all of which can be measured using mHealth approaches in a scalable and labour-inexpensive way.

The potential for existing pacing research to address the long COVID pandemic

There is now an urgent public health need to address long COVID, with over 200 million sufferers worldwide [30]. Given the analogous symptomology of ME/CFS and long COVID, and the lack of promising treatment and management strategies in ME/CFS, pacing remains the only strategy for managing long COVID symptoms. This is concerning, as the quality of evidence to support pacing is lacking. Given long COVID has reached pandemic proportions, scalable solutions will be required. In this context, we propose that technology should be harnessed to (a) deliver, but also (b) evaluate, pacing. We recently reported on a just-in-time adaptive intervention to increase physical activity during the pandemic [78]; this method could be adapted to decrease or maintain physical activity levels (i.e., pacing) in long COVID. It has the advantages of scalability and remote data collection, reducing resource commitments and participant burden, which is essential for addressing a condition with so many sufferers.

This review highlights the need for more studies concerning pacing in chronic fatiguing conditions. Future studies would benefit from examining pacing’s effect on symptomology and PEM with objectively quantified pacing, over a longer duration of examination, using the MDS. It is essential this is conducted as an RCT given that, in the case of long COVID, participants may improve their health over time, and it is necessary to determine whether pacing exerts an effect over and above the passage of time. Future studies would also benefit from digitising pacing to support individuals with varying symptom severity and to personalise support. This would improve accessibility and reduce selection bias, in addition to improving the scalability of interventions. Finally, clinicians and practitioners should be cognisant of the strength of evidence reported in this review and should exert caution when recommending pacing to their patients, given the varying methods utilised herein.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

APQ: Activity Pacing Questionnaire

BAI: Beck Anxiety Inventory

BDI: Beck Depression Inventory

Brief COPE: Brief Coping Orientation to Problems Experienced Scale

COPM: Canadian Occupational Performance Measure

CDC: Centers for Disease Control and Prevention

CFQ: Chalder Fatigue Questionnaire

CIS: Checklist Individual Strength

CPCI: Chronic Pain Coping Inventory

CBT: Cognitive behavioural therapy

CENTRAL: Cochrane Central Register of Controlled Trials

DSQ: DePaul Symptom Questionnaire

EQ-5D-5L: EuroQol five-dimensions, five-levels questionnaire

GET: Graded exercise therapy

HADS: Hospital Anxiety and Depression Scale

ME/CFS: Myalgic encephalomyelitis/chronic fatigue syndrome

PSEQ: Pain Self-Efficacy Questionnaire

PASS-20: Pain Anxiety Symptoms Scale short version

NRS: Pain Numerical Rating Scale

PHQ: Patient Health Questionnaire

PROMs: Patient-reported outcome measures

PEDro: Physiotherapy Evidence Database

PSS: Perceived Stress Scale

PEM: Post-exertional malaise

PRISMA-ScR: Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for scoping reviews

RCT: Randomised controlled trial

References

McMurray JC, May JW, Cunningham MW, Jones OY. Multisystem Inflammatory Syndrome in Children (MIS-C), a post-viral myocarditis and systemic vasculitis-a critical review of its pathogenesis and treatment. Front Pediatr. 2020;8:626182.

Perrin R, Riste L, Hann M, Walther A, Mukherjee A, Heald A. Into the looking glass: post-viral syndrome post COVID-19. Med Hypotheses. 2020;144: 110055.

Hayes LD, Ingram J, Sculthorpe NF. More than 100 persistent symptoms of SARS-CoV-2 (Long COVID): A scoping review. Front Med. 2021. https://doi.org/10.3389/fmed.2021.750378 .

McLaughlin M, Cerexhe C, Macdonald E, Ingram J, Sanal-Hayes NEM, Hayes LD, et al. A Cross-sectional study of symptom prevalence, frequency, severity, and impact of long-COVID in Scotland: part I. Am J Med. 2023. https://doi.org/10.1016/j.amjmed.2023.07.009 .

McLaughlin M, Cerexhe C, Macdonald E, Ingram J, Sanal-Hayes NEM, Hayes LD, et al. A cross-sectional study of symptom prevalence, frequency, severity, and impact of long-COVID in Scotland: part II. Am J Med. 2023. https://doi.org/10.1016/j.amjmed.2023.07.009 .

Hayes LD, Sanal-Hayes NEM, Mclaughlin M, Berry ECJ, Sculthorpe NF. People with long covid and ME/CFS exhibit similarly impaired balance and physical capacity: a case-case-control study. Am J Med. 2023;S0002–9343(23):00465–75.

Jenkins R. Post-viral fatigue syndrome. Epidemiology: lessons from the past. Br Med Bull. 1991;47:952–65.

Sandler CX, Wyller VBB, Moss-Morris R, Buchwald D, Crawley E, Hautvast J, et al. Long COVID and post-infective fatigue syndrome: a review. Open Forum Infect Dis. 2021;8:440.

Carod-Artal FJ. Post-COVID-19 syndrome: epidemiology, diagnostic criteria and pathogenic mechanisms involved. Rev Neurol. 2021;72:384–96.

Ballering AV, van Zon SKR, Olde Hartman TC, Rosmalen JGM. Lifelines corona research initiative. Persistence of somatic symptoms after COVID-19 in the Netherlands: an observational cohort study. Lancet. 2022;400:452–61.

Wong TL, Weitzer DJ. Long COVID and Myalgic Encephalomyelitis/Chronic Fatigue Syndrome (ME/CFS)-a systemic review and comparison of clinical presentation and symptomatology. Medicina (Kaunas). 2021;57:418.

Sukocheva OA, Maksoud R, Beeraka NM, Madhunapantula SV, Sinelnikov M, Nikolenko VN, et al. Analysis of post COVID-19 condition and its overlap with myalgic encephalomyelitis/chronic fatigue syndrome. J Adv Res. 2021. https://doi.org/10.1016/j.jare.2021.11.013 .

Bonilla H, Quach TC, Tiwari A, Bonilla AE, Miglis M, Yang P, et al. Myalgic Encephalomyelitis/Chronic Fatigue Syndrome (ME/CFS) is common in post-acute sequelae of SARS-CoV-2 infection (PASC): results from a post-COVID-19 multidisciplinary clinic. medrxiv. 2022. https://doi.org/10.1101/2022.08.03.22278363v1 .

Twomey R, DeMars J, Franklin K, Culos-Reed SN, Weatherald J, Wrightson JG. Chronic fatigue and postexertional malaise in people living with long COVID: an observational study. Phys Ther. 2022;102:005.

Barhorst EE, Boruch AE, Cook DB, Lindheimer JB. Pain-related post-exertional malaise in myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS) and fibromyalgia: a systematic review and three-level meta-analysis. Pain Med. 2022;23:1144–57.

Goudsmit EM. The psychological aspects and management of chronic fatigue syndrome [Thesis]. Brunel University, School of Social Sciences; 1996. https://bura.brunel.ac.uk/bitstream/2438/4283/1/FulltextThesis.pdf . Accessed 2 Aug 2022

Stussman B, Williams A, Snow J, Gavin A, Scott R, Nath A, et al. Characterization of post-exertional malaise in patients with myalgic encephalomyelitis/chronic fatigue syndrome. Front Neurol. 2020;11:1025.

Holtzman CS, Bhatia KP, Cotler J, La J. Assessment of Post-Exertional Malaise (PEM) in Patients with Myalgic Encephalomyelitis (ME) and Chronic Fatigue Syndrome (CFS): a patient-driven survey. Diagnostics. 2019. https://doi.org/10.3390/diagnostics9010026 .

Fukuda K, Straus SE, Hickie I, Sharpe MC, Dobbins JG, Komaroff A. The chronic fatigue syndrome: a comprehensive approach to its definition and study. International Chronic Fatigue Syndrome Study Group. Ann Intern Med. 1994;121:953–9.

Carruthers BM, van de Sande MI, De Meirleir KL, Klimas NG, Broderick G, Mitchell T, et al. Myalgic encephalomyelitis: international consensus criteria. J Intern Med. 2011;270:327–38.


Acknowledgements

We have no acknowledgements to make.

Funding

Open access funding provided by Swiss Federal Institute of Technology Zurich. This work was supported by grants from the National Institute for Health and Care Research (COV-LT2-0010); the funder had no role in the conceptualisation, design, data collection, analysis, decision to publish, or preparation of the manuscript.

Author information

Authors and Affiliations

Sport and Physical Activity Research Institute, School of Health and Life Sciences, University of the West of Scotland, Glasgow, UK

Nilihan E. M. Sanal-Hayes, Marie Mclaughlin, Lawrence D. Hayes, David Carless, Rachel Meach & Nicholas F. Sculthorpe

Future Health Technologies, Singapore-ETH Centre, Campus for Research Excellence and Technological Enterprise (CREATE), Singapore, Singapore

Jacqueline L. Mair

Saw Swee Hock School of Public Health, National University of Singapore, Singapore, Singapore

Long COVID Scotland, 12 Kemnay Place, Aberdeen, UK

Jane Ormerod

Physios for ME, London, UK

Natalie Hilliard

School of Education and Social Sciences, University of the West of Scotland, Glasgow, UK

Joanne Ingram

School of Health and Society, University of Salford, Salford, UK

Nilihan E. M. Sanal-Hayes

School of Sport, Exercise & Rehabilitation Sciences, University of Hull, Hull, UK

Marie Mclaughlin


Contributions

Authors’ contributions are given according to the CRediT taxonomy as follows: Conceptualization, N.E.M.S.-H., M.M., L.D.H., and N.F.S.; methodology, N.E.M.S.-H., M.M., L.D.H., and N.F.S.; software, N.E.M.S.-H., M.M., L.D.H., and N.F.S.; validation, N.E.M.S.-H., M.M., L.D.H., and N.F.S.; formal analysis, N.E.M.S.-H., M.M., L.D.H., and N.F.S.; investigation, N.E.M.S.-H., M.M., L.D.H., and N.F.S.; resources, L.D.H., J.O., D.C., N.H., J.L.M., and N.F.S.; data curation, N.E.M.S.-H., M.M., L.D.H., and N.F.S.; writing—original draft preparation, N.E.M.S.-H., M.M., L.D.H., and N.F.S.; writing—review and editing, N.E.M.S.-H., M.M., L.D.H., J.O., D.C., N.H., R.M., J.L.M., J.I., and N.F.S.; visualisation, N.E.M.S.-H. and M.M.; supervision, N.F.S.; project administration, N.E.M.S.-H., M.M., L.D.H., and N.F.S.; funding acquisition, L.D.H., J.O., D.C., N.H., J.L.M., J.I., and N.F.S. All authors have read and agreed to the published version of the manuscript.

Corresponding author

Correspondence to Jacqueline L. Mair .

Ethics declarations

Ethical approval and consent to participate

This manuscript did not involve human participants, data, or tissues, so did not require ethical approval.

Consent for publication

This paper does not contain any individual person’s data in any form.

Competing interests

We report no financial or non-financial competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Supplementary file 1. Full search string for database searching.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Sanal-Hayes, N.E.M., Mclaughlin, M., Hayes, L.D. et al. A scoping review of ‘Pacing’ for management of Myalgic Encephalomyelitis/Chronic Fatigue Syndrome (ME/CFS): lessons learned for the long COVID pandemic. J Transl Med 21, 720 (2023). https://doi.org/10.1186/s12967-023-04587-5


Received: 30 June 2023

Accepted: 03 October 2023

Published: 14 October 2023

DOI: https://doi.org/10.1186/s12967-023-04587-5


Keywords

  • Myalgic encephalomyelitis
  • Chronic fatigue syndrome
  • Post-exertional malaise

Journal of Translational Medicine

ISSN: 1479-5876



The Influence of Pricing and Advertising Claims on Greenwashing Detection Among American Consumers in the Fast Fashion Industry

      As consumers continue to demand sustainability in the fashion industry, the practice of greenwashing is growing in discussion among scholars. This study seeks to analyze the relationship between type of greenwashing claim (vague vs. false) and price level (low vs. high) in the context of the fast fashion industry. Through an experimental 2 x 2 between-subjects survey design, data was collected from 152 American consumers of all ages and education levels. The stimuli consisted of images of basic T-shirts accompanied by clothing labels with the experimental conditions depicted upon them. The data was analyzed using descriptive statistics and tests of ANOVA (analyses of variance). Results show that American consumers are unable to detect differences between vague and false advertising claims in the fashion industry, regardless of the price level. Moreover, the data suggest that American consumers are unable to detect the presence of greenwashing irrespective of the type of claim or price level in this industry, which future research should further investigate. Marketers can use these and related future findings to appropriately advertise and price their clothing products. If future studies similarly conclude that American consumers are deceived by greenwashing claims, such findings can be used to support regulatory legislation.
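The abstract above describes a 2 x 2 between-subjects design (claim type x price level) analyzed with ANOVA. The following is a minimal, hand-rolled sketch of a balanced two-way ANOVA, not the study's actual analysis code; the factor labels and toy data are hypothetical, chosen only to mirror the design's shape.

```python
# Minimal balanced two-way ANOVA for a 2 x 2 between-subjects design.
# cells[i][j] holds the responses for level i of factor A (e.g. claim type:
# vague vs. false) and level j of factor B (e.g. price: low vs. high).

def two_way_anova(cells):
    a, b = len(cells), len(cells[0])
    r = len(cells[0][0])                       # replicates per cell (balanced)
    grand = sum(sum(sum(c) for c in row) for row in cells) / (a * b * r)
    cell_means = [[sum(c) / r for c in row] for row in cells]
    mean_a = [sum(row) / b for row in cell_means]                  # factor A means
    mean_b = [sum(cell_means[i][j] for i in range(a)) / a for j in range(b)]

    # Sums of squares for the two main effects, the interaction, and error.
    ss_a = b * r * sum((m - grand) ** 2 for m in mean_a)
    ss_b = a * r * sum((m - grand) ** 2 for m in mean_b)
    ss_ab = r * sum((cell_means[i][j] - mean_a[i] - mean_b[j] + grand) ** 2
                    for i in range(a) for j in range(b))
    ss_err = sum((y - cell_means[i][j]) ** 2
                 for i in range(a) for j in range(b) for y in cells[i][j])

    ms_err = ss_err / (a * b * (r - 1))        # error mean square
    return {
        "F_A": (ss_a / (a - 1)) / ms_err,
        "F_B": (ss_b / (b - 1)) / ms_err,
        "F_AB": (ss_ab / ((a - 1) * (b - 1))) / ms_err,
    }

# Toy data: factor A has a large effect, factor B a small one, no interaction.
cells = [[[1, 2, 3], [2, 3, 4]],
         [[5, 6, 7], [6, 7, 8]]]
print(two_way_anova(cells))   # {'F_A': 48.0, 'F_B': 3.0, 'F_AB': 0.0}
```

In practice the F statistics would be compared against the F distribution with the matching degrees of freedom (or computed with a statistics package); the null result reported in the abstract corresponds to F values too small to reach significance.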


COMMENTS

  1. Survey Research

    Survey Research. Definition: Survey Research is a quantitative research method that involves collecting standardized data from a sample of individuals or groups through the use of structured questionnaires or interviews. The data collected is then analyzed statistically to identify patterns and relationships between variables, and to draw conclusions about the population being studied.

  2. Survey Research

    Survey research means collecting information about a group of people by asking them questions and analyzing the results. To conduct an effective survey, follow these six steps: Determine who will participate in the survey. Decide the type of survey (mail, online, or in-person) Design the survey questions and layout.

  3. Survey Research: Definition, Types & Methods

    Exploratory research. Exploratory research is an important part of any marketing or business strategy. Its focus is on the discovery of ideas and insights as opposed to collecting statistically accurate data. That is why exploratory research is best suited as the beginning of your total research plan. It is most commonly used for further ...

  4. Understanding and Evaluating Survey Research

    Survey research is defined as "the collection of information from a sample of individuals through their responses to questions" ( Check & Schutt, 2012, p. 160 ). This type of research allows for a variety of methods to recruit participants, collect data, and utilize various methods of instrumentation. Survey research can use quantitative ...

  5. Survey Research: Types, Examples & Methods

Qualitative research methods include one-on-one interviews, observation, case studies, and focus groups. Survey Research Scales. Nominal Scale: This is a type of survey research scale that uses numbers to label the different answer options in a survey.

  6. Survey Research: Definition, Examples and Methods

Survey Research Definition. Survey Research is defined as the process of conducting research using surveys that researchers send to survey respondents. The data collected from surveys is then statistically analyzed to draw meaningful research conclusions. In the 21st century, every organization is eager to understand what their customers think ...

  7. Survey Research: Definition, Examples & Methods

    Survey research is the process of collecting data from a predefined group (e.g. customers or potential customers) with the ultimate goal of uncovering insights about your products, services, or brand overall.. As a quantitative data collection method, survey research can provide you with a goldmine of information that can inform crucial business and product decisions.

  8. A Short Introduction to Survey Research

    Regardless of its type, survey research involves the systematic collection of information from individuals using standardized procedures. When conducting survey research, the researcher normally uses a (random or representative) sample from the population she wants to study and asks the survey subjects one or several questions about attitudes ...

  9. A Comprehensive Guide to Survey Research Methodologies

A survey is a research method that is used to collect data from a group of respondents in order to gain insights and information regarding a particular subject. It's an excellent method to gather opinions and understand how and why people feel a certain way about different situations and contexts.

  10. Doing Survey Research

    Survey research means collecting information about a group of people by asking them questions and analysing the results. To conduct an effective survey, follow these six steps: Determine who will participate in the survey. Decide the type of survey (mail, online, or in-person) Design the survey questions and layout. Distribute the survey.

  11. What types of studies are there?

    There are various types of scientific studies such as experiments and comparative analyses, observational studies, surveys, or interviews. The choice of study type will mainly depend on the research question being asked. When making decisions, patients and doctors need reliable answers to a number of questions.

  12. Understanding the 3 Main Types of Survey Research ...

    Descriptive Research. The first main type of survey research is descriptive research. This type is centered on describing, as its name suggests, a topic of study. This can be a population, an occurrence or a phenomenon. Descriptive research is often the first type of research applied around a research issue, because it paints a picture of a ...

  13. PDF Survey Research

This chapter describes a research methodology that we believe has much to offer social psychologists interested in a multimethod approach: survey research. Survey research is a specific type of field study that involves the collection of data from a sample of elements (e.g., adult women) drawn from a well-defined

  14. Types of Research Designs Compared

Types of Research Designs Compared | Guide & Examples. Published on June 20, 2019 by Shona McCombes. Revised on June 22, 2023. When you start planning a research project, developing research questions and creating a research design, you will have to make various decisions about the type of research you want to do. There are many ways to categorize different types of research.

  15. Study designs: Part 1

    The study design used to answer a particular research question depends on the nature of the question and the availability of resources. In this article, which is the first part of a series on "study designs," we provide an overview of research study designs and their classification. The subsequent articles will focus on individual designs.

  16. PDF Fundamentals of Survey Research Methodology

study, but cannot be explicitly controlled by the researcher. Before conducting the survey, the researcher must predicate a model that identifies the expected relationships among these variables. The survey is then constructed to test this model against observations of the phenomena. In contrast to survey research, a survey is simply a data ...

  17. What Is a Research Design

    Step 2: Choose a type of research design. Within both qualitative and quantitative approaches, there are several types of research design to choose from. ... You can choose just one data collection method, or use several methods in the same study. Survey methods. Surveys allow you to collect data about opinions, behaviors, experiences, ...

  18. Types of surveys with examples

    There are other types of surveys like random sample surveys (to understand public opinion or attitude) and self-selected type of studies. LEARN ABOUT: Candidate Experience Survey. Types of a survey based on deployment methods: 1. Online surveys: One of the most popular types is an online survey. With technology advancing many folds with each ...

  19. Understanding the 3 Main Types of Research Surveys

    August 21, 2023. Surveys play a vital role in collecting essential information. If you understand the different types of research surveys, you'll be able to collect more meaningful data. In this blog post, we'll walk you through the fundamentals of three main survey types: exploratory, descriptive, and causal. We'll give you insights into ...

  20. 6 Basic Types of Research Studies (Plus Pros and Cons)

    Here are six common types of research studies, along with examples that help explain the advantages and disadvantages of each: 1. Meta-analysis. A meta-analysis study helps researchers compile the quantitative data available from previous studies. It's an observational study in which the researchers don't manipulate variables.

  21. Chapter 13 Methods for Survey Studies

    The survey is a popular means of gauging people's opinion of a particular topic, such as their perception or reported use of an eHealth system. Yet surveying as a scientific approach is often misconstrued. And while a survey seems easy to conduct, ensuring that it is of high quality is much more difficult to achieve. Often the terms "survey" and "questionnaire" are used ...

  22. Mapping the global geography of cybercrime with the World Cybercrime

The survey asked participants to consider five major types of cybercrime (Technical products/services; Attacks and extortion; Data/identity theft; Scams; and Cashing out/money laundering) and nominate the countries that they consider to be the most significant sources of each of these cybercrime types. Participants then rated each nominated ...

  23. World-first "Cybercrime Index" ranks countries by cybercrime threat

    Co-author of the study, Dr Miranda Bruce from the University of Oxford and UNSW Canberra said the study will enable the public and private sectors to focus their resources on key cybercrime hubs and spend less time and funds on cybercrime countermeasures in countries where the problem is not as significant. 'The research that underpins the Index will help remove the veil of anonymity around ...

  24. Questionnaire Design

Questionnaires vs. surveys. A survey is a research method where you collect and analyze data from a group of people. A questionnaire is a specific tool or instrument for collecting the data. Designing a questionnaire means creating valid and reliable questions that address your research objectives, placing them in a useful order, and selecting an appropriate method for administration.

  25. 35 Content Marketing Statistics You Should Know

    In a 2022 Statista Research Study of marketers worldwide, 62% of respondents emphasized the importance of being "always on" for their customers, while 23% viewed content-led communications as ...

  26. A quick guide to survey research

    Within the medical realm, there are three main types of survey: epidemiological surveys, surveys on attitudes to a health service or intervention and questionnaires assessing knowledge on a particular issue or topic. 1. Despite a widespread perception that surveys are easy to conduct, in order to yield meaningful results, a survey needs ...

  27. International Journal of Applied Technologies in Library and

    This study was to investigate the Availability and Utilization of Information Resources for Students in University of Calabar Library Cross River State, Nigeria. To achieve the purpose of this study, three research questions were formulated to guide the study. Literature review was done according to the variables under study. Survey research design was adopted for the study.

  28. A scoping review of 'Pacing' for management of Myalgic

Eleven studies reported benefits of pacing, four studies reported no effect, and two studies reported a detrimental effect in comparison to the control group. Highly variable study designs and outcome measures, allied to poor to fair methodological quality, resulted in heterogeneous findings and highlights the requirement for more research ...

  29. The Influence of Pricing and Advertising Claims on Greenwashing

    As consumers continue to demand sustainability in the fashion industry, the practice of greenwashing is growing in discussion among scholars. This study seeks to analyze the relationship between type of greenwashing claim (vague vs. false) and price level (low vs. high) in the context of the fast fashion industry. Through an experimental 2 x 2 between-subjects survey design,
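Several of the snippets above note that survey research draws a random or representative sample from a well-defined population. A minimal sketch of simple random sampling without replacement, using only the Python standard library (the population of customer IDs is hypothetical):

```python
import random

# Hypothetical sampling frame: 10,000 customer IDs.
population = [f"customer_{i:05d}" for i in range(10_000)]

# Simple random sample without replacement; seeding the generator makes
# the draw reproducible, which helps when documenting the sampling step.
rng = random.Random(42)
sample = rng.sample(population, k=200)

print(len(sample))        # 200
print(len(set(sample)))   # 200 (no duplicates: drawn without replacement)
```

Because every unit in the frame has the same probability of selection, statistics computed on the sample (means, proportions) are unbiased estimates of the population values; stratified or cluster designs would require weighting the responses accordingly.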