Survey data analysis and best practices for reporting

Data can do beautiful things, but turning your survey results into clear, compelling analysis isn’t always a straightforward task. We’ve collected our tips for survey analysis along with a beginner’s guide to survey data and analysis tools.

What is survey data analysis?

Survey analysis is the process of turning the raw material of your survey data into insights and answers you can use to improve things for your business. It’s an essential part of doing survey-based research.

There are a huge number of survey data analysis methods available, from simple cross-tabulation, where data from your survey responses is arranged into rows and columns that make it easier to understand, to statistical methods that tell you things you could never work out on your own, such as whether the results you’re seeing are statistically significant.


Types of survey data

Different kinds of survey questions yield data in different forms. Here’s a quick guide to a few of them. Often, survey data will belong to more than one of these categories as they frequently overlap.

Quantitative data vs. qualitative data

What’s the difference between qualitative data and quantitative data?

  • Quantitative data, aka numerical data, involves numerical values and quantities. An example of quantitative data would be the number of times a customer has visited a location, the temperature of a city, or the scores achieved in an NPS survey.
  • Qualitative data is information that isn’t numerical. It may be verbal or visual, or consist of spoken audio or video. It’s more likely to be descriptive or subjective, although it doesn’t have to be. Qualitative data highlights the “why” behind the “what”.


Closed-ended questions

These are questions with a limited range of responses, such as a ‘yes’ or ‘no’ question like ‘Do you live in Portland, OR?’. Closed-ended questions can also take the form of multiple-choice, ranking, or drop-down menu items. Respondents can’t qualify their choice between the options or explain why they chose the one they did.

This type of question produces structured data that is easy to sort, code and quantify since the responses will fit into a limited number of ‘buckets’. However, its simplicity means you lose out on some of the finer details that respondents could have provided.

Natural language data (open-ended questions)

Answers written in the respondent’s own words are also a form of survey data. This type of response is usually given in open field (text box) question formats. Questions might begin with ‘how,’ ‘why,’ ‘describe…’ or other conversational phrases that encourage the respondent to open up.

This type of data, known as unstructured data, is rich in information. Because of its complexity and volume, it typically requires advanced tools such as Natural Language Processing and sentiment analysis to extract the full value from how respondents answered.
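
To give a sense of how automated scoring of open-ended answers works, here is a minimal sketch using the open-source NLTK library’s VADER lexicon. The example responses are invented, and a production-grade tool of the kind described above would go well beyond this simple lexicon-based approach.

```python
# A minimal sketch of lexicon-based sentiment scoring for open-ended survey answers.
# Assumes the nltk package is installed; the example responses are invented.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time download of the VADER lexicon

responses = [
    "The checkout process was quick and the staff were lovely.",
    "I waited 40 minutes and nobody could answer my question.",
]

sia = SentimentIntensityAnalyzer()
for text in responses:
    scores = sia.polarity_scores(text)  # returns neg/neu/pos/compound scores
    print(f"{scores['compound']:+.2f}  {text}")  # compound ranges from -1 to +1
```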

Categorical (nominal) data

This kind of data exists in categories that have no hierarchical relationship to each other. No item is treated as being more or less, better or worse, than the others. Examples would be primary colors (red vs. blue), genders (male vs. female) or brand names (Chrysler vs. Mitsubishi).

Multiple choice questions often produce this kind of data (though not always).

Ordinal data

Unlike categorical data, ordinal data has an intrinsic rank that relates to quantity or quality, such as degrees of preference, or how strongly someone agrees or disagrees with a statement.

Likert scales and ranking scales often serve up this kind of data.


Scalar data

Like ordinal data, scalar data deals with quantity and quality on a relative basis, with some items ranking above others. What makes it different is that it uses an established scale, such as age (expressed as a number), test scores (out of 100), or time (in days, hours, minutes, etc.).

You might get this kind of data from a drop-down or sliding scale question format, among others.

The type of data you receive affects the kind of survey results analysis you’ll be doing, so it’s very important to consider the type of survey data you will end up with when you’re writing your survey questions and designing survey flows.

Steps to analyze your survey data

Here’s an overview of how you can analyze survey data, identify trends and hopefully draw meaningful conclusions from your research.

1.   Review your research questions

Research questions are the underlying questions your survey seeks to answer. Research questions are not the same as the questions in your questionnaire, although they may cover similar ground.

It’s important to review your research questions before you analyze your survey data, so you can check that your analysis aligns with what you set out to accomplish and find out.

2.   Cross-tabulate your data

Cross-tabulation is a valuable step in sifting through your data and uncovering its meaning. When you cross-tabulate, you’re breaking out your data according to the sub-groups within your research population or your sample, and comparing the relationship between one variable and another. The table you produce will give you an overall picture of how responses vary among your subgroups.

Target the survey questions that best address your research question. For example, if you want to know how many people would be interested in buying from you in the future, cross-tabulating the data will help you see whether some groups were more likely than others to want to return. This gives you an idea of where to focus your efforts when improving your product design or your customer experience.


Cross-tabulation works best for categorical data and other types of structured data. You can cross-tabulate your data in multiple ways across different questions and sub-groups using survey analysis software. Be aware, though, that slicing and dicing your data very finely will give you a smaller sample size, which then affects the reliability of your results.
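
As an illustration, here is a minimal cross-tabulation sketch using the pandas library; the column names and responses are invented for the example.

```python
# A minimal cross-tabulation sketch with pandas; column names and answers are invented.
import pandas as pd

df = pd.DataFrame({
    "age_group":    ["18-24", "25-34", "18-24", "35-44", "25-34", "35-44"],
    "would_return": ["Yes",   "No",    "Yes",   "Yes",   "Yes",   "No"],
})

# Rows = subgroup, columns = response; normalize="index" gives % within each subgroup
print(pd.crosstab(df["age_group"], df["would_return"], normalize="index").round(2))

# Keep an eye on the raw counts too: tiny subgroups make the percentages unreliable
print(pd.crosstab(df["age_group"], df["would_return"], margins=True))
```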

3.   Review and investigate your results

Put your results in context – how have things changed since the last time you researched these kinds of questions? Do your findings tie in to changes in your market or other research done within your company?

Look at how different demographics within your sample or research population have answered, and compare your findings to other data on these groups. For example, does your survey analysis tell you something about why a certain group is purchasing less, or more? Does the data tell you anything about how well your company is meeting strategic goals, such as changing brand perceptions or appealing to a younger market?

Look at quantitative measures too. Which questions were answered the most? Which ones produced the most polarized responses? Were there any questions with very skewed data? This could be a clue to issues with survey design.

4.   Use statistical analysis to check your findings

Statistics give you certainty (or as close to it as you can get) about the results of your survey. Statistical tools like the t-test, regression and ANOVA help you make sure that the results you’re seeing are statistically significant and aren’t just there by chance.

Statistical tools can also help you determine which aspects of your data are most important, and what kinds of relationships – if any – they have with one another.
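
To make this concrete, here is a minimal sketch of a t-test and a one-way ANOVA using SciPy; the ratings and group labels are invented for the example.

```python
# A minimal significance-testing sketch with SciPy; the ratings and groups are invented.
from scipy import stats

group_a = [7, 8, 6, 9, 7, 8, 5, 9]   # e.g. ratings from one customer segment
group_b = [6, 5, 7, 6, 5, 6, 7, 4]   # ratings from another segment
group_c = [8, 7, 9, 8, 7, 9, 8, 6]

# Two-sample t-test: do two groups differ in their mean rating?
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t-test p = {p_value:.3f}")

# One-way ANOVA: do three or more groups differ?
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"ANOVA  p = {p_value:.3f}")  # p < 0.05 is a common (if arbitrary) threshold
```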

Benchmarking your survey data

One of the most powerful aspects of survey data analysis is its ability to build on itself. By repeating market research surveys at different points in time, you can not only uncover insights from your results but also strengthen those insights over time.

Using consistent types of data and methods of analysis means you can use your initial results as a benchmark for future research. What’s changed year-on-year? Has your survey data followed a steady rise, made a sudden leap or fallen incrementally? Over time, all these questions become answerable when you listen regularly and analyze your data consistently.

Maintaining your question and data types and your data analysis methods means you achieve a like-for-like measurement of results over time. And if you collect data consistently enough to see patterns and processes emerging, you can use these to make predictions about future events and outcomes.

Another benefit of data analysis over time is that you can compare your results with other people’s, provided you are using the same measurements and metrics. A classic example is NPS (Net Promoter Score), which has become a standard measurement of customer experience that companies typically track over time.
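
As a simple illustration of a benchmarkable metric, here is a minimal sketch of an NPS calculation tracked across two survey waves; all of the ratings are invented.

```python
# A minimal NPS calculation tracked across survey waves; all ratings are invented.
def nps(ratings):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

waves = {
    "2023": [9, 10, 7, 6, 9, 8, 10, 5, 9, 7],
    "2024": [10, 9, 9, 8, 10, 6, 9, 9, 7, 10],
}

previous = None
for year, ratings in waves.items():
    score = nps(ratings)
    change = "" if previous is None else f" ({score - previous:+d} vs previous wave)"
    print(f"{year}: NPS {score}{change}")
    previous = score
```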

How to present survey results

Most data isn’t very friendly to the human eye or brain in its raw form. Survey data analysis helps you turn your data into something that’s accessible, intuitive, and even interesting to a wide range of people.

1.   Make it visual

You can present data in a visual form, such as a chart or graph, or put it into a tabular form so it’s easy for people to see the relationships between variables in your crosstab analysis. Choose a graphic format that best suits your data type and clearly shows the results to the untrained eye. There are plenty of options, including line graphs, bar graphs, Venn diagrams, word clouds and pie charts. If time and budget allow, you can create an infographic or animation.
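
For instance, a basic bar chart of closed-ended responses takes only a few lines with matplotlib; the labels and counts below are invented.

```python
# A minimal bar chart sketch with matplotlib; labels and counts are invented.
import matplotlib.pyplot as plt

options = ["Very satisfied", "Satisfied", "Neutral", "Dissatisfied"]
counts = [120, 210, 64, 31]

fig, ax = plt.subplots(figsize=(6, 3))
ax.bar(options, counts)
ax.set_ylabel("Responses")
ax.set_title("Overall satisfaction (n=425)")
fig.tight_layout()
fig.savefig("satisfaction.png", dpi=150)  # or plt.show() in an interactive session
```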

2.   Keep language human

You can express discoveries in plain language, for example, in phrases like “customers in the USA consistently preferred potato chips to corn chips.” Adding direct quotes from your natural language data (provided respondents have consented to this) can add immediacy and illustrate your points.

3.   Tell the story of your research

Another approach is to express data using the power of storytelling, using a beginning-middle-end or situation-crisis-resolution structure to talk about how trends have emerged or challenges have been overcome. This helps people understand the context of your research and why you did it the way you did.

4.   Include your insights

As well as presenting your data in terms of numbers and proportions, always be sure to share the insights it has produced too. Insights come when you apply knowledge and ideas to the data in the survey, which means they’re often more striking and easier to grasp than the data by itself. Insights may take the form of a recommended action, or examine how two different data points are connected.


Common mistakes in analyzing data and how to avoid them

1.   Being too quick to interpret survey results

It’s easy to get carried away when the data seems to show the results you were expecting or confirms a hypothesis you started with. This is why it’s so important to use statistics to make sure your findings are statistically significant, i.e. unlikely to have arisen by chance. Remember that a skewed or coincidental result becomes more likely with a smaller sample size.
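
One way to see the effect of sample size is the margin of error for a proportion. This minimal sketch uses the standard formula z * sqrt(p(1-p)/n) at 95% confidence; the sample sizes are illustrative.

```python
# A minimal sketch of how the margin of error widens as the sample shrinks, using the
# standard formula for a proportion at 95% confidence: z * sqrt(p * (1 - p) / n).
import math

def margin_of_error(p, n, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

p = 0.5  # worst-case proportion, which gives the widest interval
for n in (2000, 500, 100, 30):
    print(f"n = {n:>4}: ±{margin_of_error(p, n) * 100:.1f} percentage points")
```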

2.   Treating correlation like causation

You may have heard the phrase “correlation is not causation” before. It’s well-known for a reason: mistaking a link between two independent variables as a causal relationship between them is a common pitfall in research. Results can correlate without one having a direct effect on the other.

An example is when there is another common variable involved that isn’t measured and acts as a kind of missing link between the correlated variables. Sales of sunscreen might go up in line with the number of ice-creams sold at the beach, but it’s not because there’s something about ice-cream that makes people more vulnerable to getting sunburned. It’s because a third variable – sunshine – affects both sunscreen use and ice-cream sales.
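
The sunscreen-and-ice-cream example can be simulated in a few lines: both variables are driven by a third (sunshine), so they correlate strongly even though neither causes the other. The numbers below are invented.

```python
# A minimal simulation of a spurious correlation: sunshine (the confounder) drives both
# ice-cream and sunscreen sales, so the two correlate strongly without any causal link.
import random

random.seed(1)
sunshine = [random.uniform(0, 10) for _ in range(200)]       # hours of sun per day
ice_cream = [3 * s + random.gauss(0, 2) for s in sunshine]   # sales driven by sunshine
sunscreen = [5 * s + random.gauss(0, 3) for s in sunshine]   # also driven by sunshine

def pearson(x, y):
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

print(f"ice cream vs sunscreen: r = {pearson(ice_cream, sunscreen):.2f}")
```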

3.   Missing the nuances in qualitative natural language data

Human language is complex, and analyzing survey data in the form of speech or text isn’t as straightforward as mapping vocabulary items to positive or negative codes. The latest AI solutions go further, uncovering meaning, emotion and intent within human language.

Trusting your rich qualitative data to an AI’s interpretation means relying on the software’s ability to understand language in the way a human would, taking into account things like context and conversational dynamics. If you’re investing in software to analyze natural language data in your surveys, make sure it’s capable of sentiment analysis that uses machine learning to get a deeper understanding of what survey respondents are trying to tell you.

Tools for survey analysis

If you’re planning to run an ongoing data insights program (and we recommend that you do), it’s important to have tools on hand that make it easy and efficient to perform your research and extract valuable insights from the results.

It’s even better if those tools help you to share your findings with the right people, at the right time, in a format that works for them. Here are a few attributes to look for in a survey analysis software platform.

  • Easy to use (for non-experts): Look for software that demands minimal training or expertise, and you’ll save time and effort while maximizing the number of people who can pitch in on your experience management program. User-friendly drag-and-drop interfaces, straightforward menus, and automated data analysis are all worth looking out for.
  • Works on any platform: Don’t restrict your team to a single place where software is located on a few terminals. Instead, choose a cloud-based platform that’s optimized for mobile, desktop, tablet and more.
  • Integrates with your existing setup: Stand-alone analysis tools create additional work you shouldn’t have to do. Why export, convert, paste and print out when you can use a software tool that plugs straight into your existing systems via API?
  • Incorporates statistical analysis: Choose a system that gives you the tools not just to process and present your data, but to refine your survey results using statistical tools that generate deep insights and future predictions with just a few clicks.
  • Comes with first-class support: The best survey data tool is one that scales with you and adapts to your goals and growth. A large part of that is having an expert team on call to answer questions, propose bespoke solutions, and help you get the most out of the service you’ve paid for.

Tips from the team at Qualtrics

We’ve run more than a few survey research programs in our time, and we have some tips to share that you may not find in the average survey data analysis guide. Here are some innovative ways to help make sure your survey analysis hits the mark, grabs attention, and provokes change.

Write the headlines

The #1 way to make your research hit the mark is to start with the end in mind. Before you even write your survey questions, make sample headlines of what the survey will discover. Sample headlines are the main data takeaways from your research. Some sample headlines might be:

  • The #1 concern that travelers have with staying at our hotel is X
  • X% of visitors to our showroom want to be approached by a salesperson within the first 10 minutes
  • Diners are X% more likely to choose our new lunch menu than our old one

You may even want to sketch out mock charts that show how the data will look in your results. If you “write” the results first, those results become a guide to help you design questions that ensure you get the data you want.

Gut Data Gut

We live in a data-driven society, and marketing is a data-driven business function. But don’t be afraid to overlay qualitative research findings onto your quantitative data. Don’t be hesitant to apply what you know in your gut alongside what you know from the data.

This is called “Gut Data Gut”. Check your gut, check your data, and check your gut. If you have personal experience with the research topic, use it! If you have qualitative research that supports the data, use it!

Your survey is one star in a constellation of information that combines to tell a story. Use every atom of information at your disposal. Just be sure to let your audience know when you are showing them findings from statistically significant research and when it comes from a different source.

Write a mock press release to encourage taking action

One of the biggest challenges of research is acting on it. This is sometimes called the “knowing/doing gap”, where an organization has a difficult time implementing the truths it knows.

One way you can ignite change with your research is to write a press release dated six months into the future that proudly announces all the changes as a result of your research. Maybe it touts the three new features that were added to your product. Perhaps it introduces your new approach to technical support. Maybe it outlines the improvements to your website.

After six months, gather your team and read the press release together to see how well you executed change based on the research.

Focus your research findings

Everyone consumes information differently. Some people want to fly over your findings at 30,000 feet and others want to slog through the weeds in their rubber boots. You should package your research for these different research consumer types.

Package your survey results analysis findings in 5 ways:

  • A 1-page executive summary with key insights
  • A 1-page stat sheet that ticks off the top supporting stats
  • A shareable slide deck with data visuals that can be understood as a stand-alone or by being presented in person
  • Live dashboards with all the survey data that allow team members to filter the data and dig in as deeply as they want on a DIY basis
  • The Mock Press Release (mentioned above)

How to analyze survey data

Reporting on survey results will prove the value of your work. Learn more about statistical analysis types or jump into an analysis type below to see our favorite tools of the trade:

  • Conjoint Analysis
  • CrossTab Analysis
  • Cluster Analysis
  • Factor Analysis
  • Analysis of Variance (ANOVA)



Survey Research – Types, Methods, Examples


Definition:

Survey Research is a quantitative research method that involves collecting standardized data from a sample of individuals or groups through the use of structured questionnaires or interviews. The data collected is then analyzed statistically to identify patterns and relationships between variables, and to draw conclusions about the population being studied.

Survey research can be used to answer a variety of questions, including:

  • What are people’s opinions about a certain topic?
  • What are people’s experiences with a certain product or service?
  • What are people’s beliefs about a certain issue?

Survey Research Methods

Survey Research Methods are as follows:

  • Telephone surveys: A survey research method where questions are administered to respondents over the phone, often used in market research or political polling.
  • Face-to-face surveys: A survey research method where questions are administered to respondents in person, often used in social or health research.
  • Mail surveys: A survey research method where questionnaires are sent to respondents through mail, often used in customer satisfaction or opinion surveys.
  • Online surveys: A survey research method where questions are administered to respondents through online platforms, often used in market research or customer feedback.
  • Email surveys: A survey research method where questionnaires are sent to respondents through email, often used in customer satisfaction or opinion surveys.
  • Mixed-mode surveys: A survey research method that combines two or more survey modes, often used to increase response rates or reach diverse populations.
  • Computer-assisted surveys: A survey research method that uses computer technology to administer or collect survey data, often used in large-scale surveys or data collection.
  • Interactive voice response (IVR) surveys: A survey research method where respondents answer questions through a touch-tone telephone system, often used in automated customer satisfaction or opinion surveys.
  • Mobile surveys: A survey research method where questions are administered to respondents through mobile devices, often used in market research or customer feedback.
  • Group-administered surveys: A survey research method where questions are administered to a group of respondents simultaneously, often used in education or training evaluation.
  • Web-intercept surveys: A survey research method where questions are administered to website visitors, often used in website or user experience research.
  • In-app surveys: A survey research method where questions are administered to users of a mobile application, often used in mobile app or user experience research.
  • Social media surveys: A survey research method where questions are administered to respondents through social media platforms, often used in social media or brand awareness research.
  • SMS surveys: A survey research method where questions are administered to respondents through text messaging, often used in customer feedback or opinion surveys.
  • Mixed-method surveys: A survey research method that combines both qualitative and quantitative data collection methods, often used in exploratory or mixed-method research.
  • Drop-off surveys: A survey research method where respondents are provided with a survey questionnaire and asked to return it at a later time or through a designated drop-off location.
  • Intercept surveys: A survey research method where respondents are approached in public places and asked to participate in a survey, often used in market research or customer feedback.
  • Hybrid surveys: A survey research method that combines two or more survey modes, data sources, or research methods, often used in complex or multi-dimensional research questions.

Types of Survey Research

There are several types of survey research that can be used to collect data from a sample of individuals or groups. The following are common types of survey research:

  • Cross-sectional survey: A type of survey research that gathers data from a sample of individuals at a specific point in time, providing a snapshot of the population being studied.
  • Longitudinal survey: A type of survey research that gathers data from the same sample of individuals over an extended period of time, allowing researchers to track changes or trends in the population being studied.
  • Panel survey: A type of longitudinal survey research that tracks the same sample of individuals over time, typically collecting data at multiple points in time.
  • Epidemiological survey: A type of survey research that studies the distribution and determinants of health and disease in a population, often used to identify risk factors and inform public health interventions.
  • Observational survey: A type of survey research that collects data through direct observation of individuals or groups, often used in behavioral or social research.
  • Correlational survey: A type of survey research that measures the degree of association or relationship between two or more variables, often used to identify patterns or trends in data.
  • Experimental survey: A type of survey research that involves manipulating one or more variables to observe the effect on an outcome, often used to test causal hypotheses.
  • Descriptive survey: A type of survey research that describes the characteristics or attributes of a population or phenomenon, often used in exploratory research or to summarize existing data.
  • Diagnostic survey: A type of survey research that assesses the current state or condition of an individual or system, often used in health or organizational research.
  • Explanatory survey: A type of survey research that seeks to explain or understand the causes or mechanisms behind a phenomenon, often used in social or psychological research.
  • Process evaluation survey: A type of survey research that measures the implementation and outcomes of a program or intervention, often used in program evaluation or quality improvement.
  • Impact evaluation survey: A type of survey research that assesses the effectiveness or impact of a program or intervention, often used to inform policy or decision-making.
  • Customer satisfaction survey: A type of survey research that measures the satisfaction or dissatisfaction of customers with a product, service, or experience, often used in marketing or customer service research.
  • Market research survey: A type of survey research that collects data on consumer preferences, behaviors, or attitudes, often used in market research or product development.
  • Public opinion survey: A type of survey research that measures the attitudes, beliefs, or opinions of a population on a specific issue or topic, often used in political or social research.
  • Behavioral survey: A type of survey research that measures actual behavior or actions of individuals, often used in health or social research.
  • Attitude survey: A type of survey research that measures the attitudes, beliefs, or opinions of individuals, often used in social or psychological research.
  • Opinion poll: A type of survey research that measures the opinions or preferences of a population on a specific issue or topic, often used in political or media research.
  • Ad hoc survey: A type of survey research that is conducted for a specific purpose or research question, often used in exploratory research or to answer a specific research question.

Types Based on Methodology

Based on methodology, surveys are divided into two types:

Quantitative Survey Research

Quantitative survey research is a method of collecting numerical data from a sample of participants through the use of standardized surveys or questionnaires. The purpose of quantitative survey research is to gather empirical evidence that can be analyzed statistically to draw conclusions about a particular population or phenomenon.

In quantitative survey research, the questions are structured and pre-determined, often utilizing closed-ended questions, where participants are given a limited set of response options to choose from. This approach allows for efficient data collection and analysis, as well as the ability to generalize the findings to a larger population.

Quantitative survey research is often used in market research, social sciences, public health, and other fields where numerical data is needed to make informed decisions and recommendations.

Qualitative Survey Research

Qualitative survey research is a method of collecting non-numerical data from a sample of participants through the use of open-ended questions or semi-structured interviews. The purpose of qualitative survey research is to gain a deeper understanding of the experiences, perceptions, and attitudes of participants towards a particular phenomenon or topic.

In qualitative survey research, the questions are open-ended, allowing participants to share their thoughts and experiences in their own words. This approach allows for a rich and nuanced understanding of the topic being studied, and can provide insights that are difficult to capture through quantitative methods alone.

Qualitative survey research is often used in social sciences, education, psychology, and other fields where a deeper understanding of human experiences and perceptions is needed to inform policy, practice, or theory.

Data Analysis Methods

There are several data analysis methods that survey researchers may use, including:

  • Descriptive statistics: This method is used to summarize and describe the basic features of the survey data, such as the mean, median, mode, and standard deviation. These statistics can help researchers understand the distribution of responses and identify any trends or patterns.
  • Inferential statistics: This method is used to make inferences about the larger population based on the data collected in the survey. Common inferential statistical methods include hypothesis testing, regression analysis, and correlation analysis.
  • Factor analysis: This method is used to identify underlying factors or dimensions in the survey data. This can help researchers simplify the data and identify patterns and relationships that may not be immediately apparent.
  • Cluster analysis: This method is used to group similar respondents together based on their survey responses. This can help researchers identify subgroups within the larger population and understand how different groups may differ in their attitudes, behaviors, or preferences (a minimal sketch of this approach appears after this list).
  • Structural equation modeling: This method is used to test complex relationships between variables in the survey data. It can help researchers understand how different variables may be related to one another and how they may influence one another.
  • Content analysis: This method is used to analyze open-ended responses in the survey data. Researchers may use software to identify themes or categories in the responses, or they may manually review and code the responses.
  • Text mining: This method is used to analyze text-based survey data, such as responses to open-ended questions. Researchers may use software to identify patterns and themes in the text, or they may manually review and code the text.
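
To make one of these methods concrete, here is a minimal cluster-analysis sketch using scikit-learn’s KMeans; the three rating questions and all of the data are invented for illustration.

```python
# A minimal cluster-analysis sketch with scikit-learn; the rating data is invented.
# Each row is one respondent's 1-10 ratings for price, quality, and support.
import numpy as np
from sklearn.cluster import KMeans

ratings = np.array([
    [9, 8, 9], [8, 9, 9],   # happy across the board
    [3, 4, 2], [2, 3, 3],   # unhappy across the board
    [9, 2, 8], [8, 3, 9],   # happy except on quality
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(ratings)
for label in sorted(set(kmeans.labels_)):
    members = ratings[kmeans.labels_ == label]
    print(f"segment {label}: {len(members)} respondents, mean ratings {members.mean(axis=0)}")
```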

Applications of Survey Research

Here are some common applications of survey research:

  • Market Research: Companies use survey research to gather insights about customer needs, preferences, and behavior. These insights are used to create marketing strategies and develop new products.
  • Public Opinion Research: Governments and political parties use survey research to understand public opinion on various issues. This information is used to develop policies and make decisions.
  • Social Research: Survey research is used in social research to study social trends, attitudes, and behavior. Researchers use survey data to explore topics such as education, health, and social inequality.
  • Academic Research: Survey research is used in academic research to study various phenomena. Researchers use survey data to test theories, explore relationships between variables, and draw conclusions.
  • Customer Satisfaction Research: Companies use survey research to gather information about customer satisfaction with their products and services. This information is used to improve customer experience and retention.
  • Employee Surveys: Employers use survey research to gather feedback from employees about their job satisfaction, working conditions, and organizational culture. This information is used to improve employee retention and productivity.
  • Health Research: Survey research is used in health research to study topics such as disease prevalence, health behaviors, and healthcare access. Researchers use survey data to develop interventions and improve healthcare outcomes.

Examples of Survey Research

Here are some real-world examples of survey research:

  • COVID-19 Pandemic Surveys: Since the outbreak of the COVID-19 pandemic, surveys have been conducted to gather information about public attitudes, behaviors, and perceptions related to the pandemic. Governments and healthcare organizations have used this data to develop public health strategies and messaging.
  • Political Polls During Elections: During election seasons, surveys are used to measure public opinion on political candidates, policies, and issues in real-time. This information is used by political parties to develop campaign strategies and make decisions.
  • Customer Feedback Surveys: Companies often use real-time customer feedback surveys to gather insights about customer experience and satisfaction. This information is used to improve products and services quickly.
  • Event Surveys: Organizers of events such as conferences and trade shows often use surveys to gather feedback from attendees in real-time. This information can be used to improve future events and make adjustments during the current event.
  • Website and App Surveys: Website and app owners use surveys to gather real-time feedback from users about the functionality, user experience, and overall satisfaction with their platforms. This feedback can be used to improve the user experience and retain customers.
  • Employee Pulse Surveys: Employers use real-time pulse surveys to gather feedback from employees about their work experience and overall job satisfaction. This feedback is used to make changes in real-time to improve employee retention and productivity.

Purpose of Survey Research

The purpose of survey research is to gather data and insights from a representative sample of individuals. Survey research allows researchers to collect data quickly and efficiently from a large number of people, making it a valuable tool for understanding attitudes, behaviors, and preferences.

Here are some common purposes of survey research:

  • Descriptive Research: Survey research is often used to describe characteristics of a population or a phenomenon. For example, a survey could be used to describe the characteristics of a particular demographic group, such as age, gender, or income.
  • Exploratory Research: Survey research can be used to explore new topics or areas of research. Exploratory surveys are often used to generate hypotheses or identify potential relationships between variables.
  • Explanatory Research: Survey research can be used to explain relationships between variables. For example, a survey could be used to determine whether there is a relationship between educational attainment and income.
  • Evaluation Research: Survey research can be used to evaluate the effectiveness of a program or intervention. For example, a survey could be used to evaluate the impact of a health education program on behavior change.
  • Monitoring Research: Survey research can be used to monitor trends or changes over time. For example, a survey could be used to monitor changes in attitudes towards climate change or political candidates over time.

When to use Survey Research

Survey research is particularly appropriate in certain circumstances. Here are some situations where it may be useful:

  • When the research question involves attitudes, beliefs, or opinions: Survey research is particularly useful for understanding attitudes, beliefs, and opinions on a particular topic. For example, a survey could be used to understand public opinion on a political issue.
  • When the research question involves behaviors or experiences: Survey research can also be useful for understanding behaviors and experiences. For example, a survey could be used to understand the prevalence of a particular health behavior.
  • When a large sample size is needed: Survey research allows researchers to collect data from a large number of people quickly and efficiently. This makes it a useful method when a large sample size is needed to ensure statistical validity.
  • When the research question is time-sensitive: Survey research can be conducted quickly, which makes it a useful method when the research question is time-sensitive. For example, a survey could be used to understand public opinion on a breaking news story.
  • When the research question involves a geographically dispersed population: Survey research can be conducted online, which makes it a useful method when the population of interest is geographically dispersed.

How to Conduct Survey Research

Conducting survey research involves several steps that need to be carefully planned and executed. Here is a general overview of the process:

  • Define the research question: The first step in conducting survey research is to clearly define the research question. The research question should be specific, measurable, and relevant to the population of interest.
  • Develop a survey instrument: The next step is to develop a survey instrument. This can be done using various methods, such as online survey tools or paper surveys. The survey instrument should be designed to elicit the information needed to answer the research question, and should be pre-tested with a small sample of individuals.
  • Select a sample: The sample is the group of individuals who will be invited to participate in the survey. The sample should be representative of the population of interest, and the size of the sample should be sufficient to ensure statistical validity.
  • Administer the survey: The survey can be administered in various ways, such as online, by mail, or in person. The method of administration should be chosen based on the population of interest and the research question.
  • Analyze the data: Once the survey data is collected, it needs to be analyzed. This involves summarizing the data using statistical methods, such as frequency distributions or regression analysis.
  • Draw conclusions: The final step is to draw conclusions based on the data analysis. This involves interpreting the results and answering the research question.

Advantages of Survey Research

There are several advantages to using survey research, including:

  • Efficient data collection: Survey research allows researchers to collect data quickly and efficiently from a large number of people. This makes it a useful method for gathering information on a wide range of topics.
  • Standardized data collection: Surveys are typically standardized, which means that all participants receive the same questions in the same order. This ensures that the data collected is consistent and reliable.
  • Cost-effective: Surveys can be conducted online, by mail, or in person, which makes them a cost-effective method of data collection.
  • Anonymity: Participants can remain anonymous when responding to a survey. This can encourage participants to be more honest and open in their responses.
  • Easy comparison: Surveys allow for easy comparison of data between different groups or over time. This makes it possible to identify trends and patterns in the data.
  • Versatility: Surveys can be used to collect data on a wide range of topics, including attitudes, beliefs, behaviors, and preferences.

Limitations of Survey Research

Here are some of the main limitations of survey research:

  • Limited depth: Surveys are typically designed to collect quantitative data, which means that they do not provide much depth or detail about people’s experiences or opinions. This can limit the insights that can be gained from the data.
  • Potential for bias: Surveys can be affected by various biases, including selection bias, response bias, and social desirability bias. These biases can distort the results and make them less accurate.
  • Limited validity: Surveys are only as valid as the questions they ask. If the questions are poorly designed or ambiguous, the results may not accurately reflect the respondents’ attitudes or behaviors.
  • Limited generalizability: Survey results are only generalizable to the population from which the sample was drawn. If the sample is not representative of the population, the results may not be generalizable to the larger population.
  • Limited ability to capture context: Surveys typically do not capture the context in which attitudes or behaviors occur. This can make it difficult to understand the reasons behind the responses.
  • Limited ability to capture complex phenomena: Surveys are not well-suited to capture complex phenomena, such as emotions or the dynamics of interpersonal relationships.

Survey Sample

The following is an example of a survey sample:

Welcome to our Survey Research Page! We value your opinions and appreciate your participation in this survey. Please answer the questions below as honestly and thoroughly as possible.

1. What is your age?

  • A) Under 18
  • G) 65 or older

2. What is your highest level of education completed?

  • A) Less than high school
  • B) High school or equivalent
  • C) Some college or technical school
  • D) Bachelor’s degree
  • E) Graduate or professional degree

3. What is your current employment status?

  • A) Employed full-time
  • B) Employed part-time
  • C) Self-employed
  • D) Unemployed

4. How often do you use the internet per day?

  • A) Less than 1 hour
  • B) 1-3 hours
  • C) 3-5 hours
  • D) 5-7 hours
  • E) More than 7 hours

5. How often do you engage in social media per day?

6. Have you ever participated in a survey research study before?

7. If you have participated in a survey research study before, how was your experience?

  • A) Excellent
  • E) Very poor

8. What are some of the topics that you would be interested in participating in a survey research study about?

……………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………….

9. How often would you be willing to participate in survey research studies?

  • A) Once a week
  • B) Once a month
  • C) Once every 6 months
  • D) Once a year

10. Any additional comments or suggestions?

Thank you for taking the time to complete this survey. Your feedback is important to us and will help us improve our survey research efforts.



Doing Survey Research | A Step-by-Step Guide & Examples

Published on 6 May 2022 by Shona McCombes . Revised on 10 October 2022.

Survey research means collecting information about a group of people by asking them questions and analysing the results. To conduct an effective survey, follow these six steps:

  • Determine who will participate in the survey
  • Decide the type of survey (mail, online, or in-person)
  • Design the survey questions and layout
  • Distribute the survey
  • Analyse the responses
  • Write up the results

Surveys are a flexible method of data collection that can be used in many different types of research.

Table of contents

  • What are surveys used for?
  • Step 1: Define the population and sample
  • Step 2: Decide on the type of survey
  • Step 3: Design the survey questions
  • Step 4: Distribute the survey and collect responses
  • Step 5: Analyse the survey results
  • Step 6: Write up the survey results
  • Frequently asked questions about surveys

What are surveys used for?

Surveys are used as a method of gathering data in many different fields. They are a good choice when you want to find out about the characteristics, preferences, opinions, or beliefs of a group of people.

Common uses of survey research include:

  • Social research: Investigating the experiences and characteristics of different social groups
  • Market research: Finding out what customers think about products, services, and companies
  • Health research: Collecting data from patients about symptoms and treatments
  • Politics: Measuring public opinion about parties and policies
  • Psychology: Researching personality traits, preferences, and behaviours

Surveys can be used in both cross-sectional studies , where you collect data just once, and longitudinal studies , where you survey the same sample several times over an extended period.


Step 1: Define the population and sample

Before you start conducting survey research, you should already have a clear research question that defines what you want to find out. Based on this question, you need to determine exactly who you will target to participate in the survey.

Populations

The target population is the specific group of people that you want to find out about. This group can be very broad or relatively narrow. For example:

  • The population of Brazil
  • University students in the UK
  • Second-generation immigrants in the Netherlands
  • Customers of a specific company aged 18 to 24
  • British transgender women over the age of 50

Your survey should aim to produce results that can be generalised to the whole population. That means you need to carefully define exactly who you want to draw conclusions about.

It’s rarely possible to survey the entire population of your research – it would be very difficult to get a response from every person in Brazil or every university student in the UK. Instead, you will usually survey a sample from the population.

The sample size depends on how big the population is. You can use an online sample calculator to work out how many responses you need.
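
As an illustration of what such a calculator does, here is a minimal sketch of Cochran’s sample-size formula for a proportion with a finite-population correction, assuming a 5% margin of error and 95% confidence; the population figures are invented.

```python
# A minimal sketch of what an online sample calculator does: Cochran's formula for a
# proportion, with a finite-population correction. Margin 5%, 95% confidence, p = 0.5.
import math

def sample_size(population, margin=0.05, z=1.96, p=0.5):
    n0 = (z ** 2) * p * (1 - p) / margin ** 2           # required size for a huge population
    return math.ceil(n0 / (1 + (n0 - 1) / population))  # adjust for the actual population

print(sample_size(population=2_000_000))  # large population -> roughly 385 responses
print(sample_size(population=1_000))      # small population -> roughly 278 responses
```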

There are many sampling methods that allow you to generalise to broad populations. In general, though, the sample should aim to be representative of the population as a whole. The larger and more representative your sample, the more valid your conclusions.

Step 2: Decide on the type of survey

There are two main types of survey:

  • A questionnaire, where a list of questions is distributed by post, online, or in person, and respondents fill it out themselves
  • An interview, where the researcher asks a set of questions by phone or in person and records the responses

Which type you choose depends on the sample size and location, as well as the focus of the research.

Questionnaires

Sending out a paper survey by post is a common method of gathering demographic information (for example, in a government census of the population).

  • You can easily access a large sample.
  • You have some control over who is included in the sample (e.g., residents of a specific region).
  • The response rate is often low.

Online surveys are a popular choice for students doing dissertation research, due to the low cost and flexibility of this method. There are many online tools available for constructing surveys, such as SurveyMonkey and Google Forms.

  • You can quickly access a large sample without constraints on time or location.
  • The data is easy to process and analyse.
  • The anonymity and accessibility of online surveys mean you have less control over who responds.

If your research focuses on a specific location, you can distribute a written questionnaire to be completed by respondents on the spot. For example, you could approach the customers of a shopping centre or ask all students to complete a questionnaire at the end of a class.

  • You can screen respondents to make sure only people in the target population are included in the sample.
  • You can collect time- and location-specific data (e.g., the opinions of a shop’s weekday customers).
  • The sample size will be smaller, so this method is less suitable for collecting data on broad populations.

Oral interviews are a useful method for smaller sample sizes. They allow you to gather more in-depth information on people’s opinions and preferences. You can conduct interviews by phone or in person.

  • You have personal contact with respondents, so you know exactly who will be included in the sample in advance.
  • You can clarify questions and ask for follow-up information when necessary.
  • The lack of anonymity may cause respondents to answer less honestly, and there is more risk of researcher bias.

Like questionnaires, interviews can be used to collect quantitative data: the researcher records each response as a category or rating and statistically analyses the results. But they are more commonly used to collect qualitative data: the interviewees’ full responses are transcribed and analysed individually to gain a richer understanding of their opinions and feelings.

Step 3: Design the survey questions

Next, you need to decide which questions you will ask and how you will ask them. It’s important to consider:

  • The type of questions
  • The content of the questions
  • The phrasing of the questions
  • The ordering and layout of the survey

Open-ended vs closed-ended questions

There are two main forms of survey questions: open-ended and closed-ended. Many surveys use a combination of both.

Closed-ended questions give the respondent a predetermined set of answers to choose from. A closed-ended question can include:

  • A binary answer (e.g., yes/no or agree/disagree )
  • A scale (e.g., a Likert scale with five points ranging from strongly agree to strongly disagree)
  • A list of options with a single answer possible (e.g., age categories)
  • A list of options with multiple answers possible (e.g., leisure interests)

Closed-ended questions are best for quantitative research. They provide you with numerical data that can be statistically analysed to find patterns, trends, and correlations.

Open-ended questions are best for qualitative research. This type of question has no predetermined answers to choose from. Instead, the respondent answers in their own words.

Open questions are most common in interviews, but you can also use them in questionnaires. They are often useful as follow-up questions to ask for more detailed explanations of responses to the closed questions.

The content of the survey questions

To ensure the validity and reliability of your results, you need to carefully consider each question in the survey. All questions should be narrowly focused with enough context for the respondent to answer accurately. Avoid questions that are not directly relevant to the survey’s purpose.

When constructing closed-ended questions, ensure that the options cover all possibilities. If you include a list of options that isn’t exhaustive, you can add an ‘other’ field.

Phrasing the survey questions

In terms of language, the survey questions should be as clear and precise as possible. Tailor the questions to your target population, keeping in mind their level of knowledge of the topic.

Use language that respondents will easily understand, and avoid words with vague or ambiguous meanings. Make sure your questions are phrased neutrally, with no bias towards one answer or another.

Ordering the survey questions

The questions should be arranged in a logical order. Start with easy, non-sensitive, closed-ended questions that will encourage the respondent to continue.

If the survey covers several different topics or themes, group together related questions. You can divide a questionnaire into sections to help respondents understand what is being asked in each part.

If a question refers back to or depends on the answer to a previous question, they should be placed directly next to one another.

Step 4: Distribute the survey and collect responses

Before you start, create a clear plan for where, when, how, and with whom you will conduct the survey. Determine in advance how many responses you require and how you will gain access to the sample.

When you are satisfied that you have created a strong research design suitable for answering your research questions, you can conduct the survey through your method of choice – by post, online, or in person.

Step 5: Analyse the survey results

There are many methods of analysing the results of your survey. First you have to process the data, usually with the help of a computer program to sort all the responses. You should also cleanse the data by removing incomplete or incorrectly completed responses.
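
A minimal cleansing sketch with pandas might look like the following; the file name, column names and the 60-second completion threshold are hypothetical.

```python
# A minimal data-cleansing sketch with pandas; the file name, column names, and the
# 60-second completion threshold are hypothetical.
import pandas as pd

df = pd.read_csv("survey_responses.csv")

df = df.drop_duplicates(subset="respondent_id")              # remove duplicate submissions
df = df.dropna(subset=["q1_satisfaction", "q2_recommend"])   # drop incomplete responses
df = df[df["completion_time_sec"] >= 60]                     # drop suspiciously fast responses

print(f"{len(df)} usable responses after cleaning")
```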

If you asked open-ended questions, you will have to code the responses by assigning labels to each response and organising them into categories or themes. You can also use more qualitative methods, such as thematic analysis, which is especially suitable for analysing interviews.

Statistical analysis is usually conducted using programs like SPSS or Stata. The same set of survey data can be subject to many analyses.

Step 6: Write up the survey results

Finally, when you have collected and analysed all the necessary data, you will write it up as part of your thesis, dissertation, or research paper.

In the methodology section, you describe exactly how you conducted the survey. You should explain the types of questions you used, the sampling method, when and where the survey took place, and the response rate. You can include the full questionnaire as an appendix and refer to it in the text if relevant.

Then introduce the analysis by describing how you prepared the data and the statistical methods you used to analyse it. In the results section, you summarise the key results from your analysis.

Frequently asked questions about surveys

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviours. It is made up of four or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey, you present participants with Likert-type questions or statements and a continuum of possible responses, usually five or seven, to capture their degree of agreement.

Individual Likert-type questions are generally considered ordinal data, because the items have clear rank order but don’t have an even distribution.

Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.
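
For example, combining several Likert items into an overall scale score is straightforward with pandas; the item names, the 1–5 coding and the reverse-scored item below are assumptions for illustration.

```python
# A minimal sketch of combining Likert items into a scale score with pandas.
# Assumes five items scored 1-5, with item 3 negatively worded (so it is reverse-coded).
import pandas as pd

df = pd.DataFrame({
    "item1": [5, 4, 2],
    "item2": [4, 4, 1],
    "item3": [1, 2, 5],  # negatively worded item
    "item4": [5, 3, 2],
    "item5": [4, 5, 1],
})

df["item3"] = 6 - df["item3"]                    # reverse-code the negative item
df["scale_score"] = df[["item1", "item2", "item3", "item4", "item5"]].sum(axis=1)
print(df["scale_score"])  # summed scores are what is sometimes treated as interval data
```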

The type of data determines what statistical tests you should use to analyse your data.

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analysing data from people using questionnaires.



Survey Research: Definition, Methods, Examples, and More


What is Survey Research?

Survey research, a key method in marketing research, is defined as the systematic collection and analysis of data gathered from respondent feedback through questionnaires or interviews. This primary research method is designed to gather information about individuals' opinions, behaviors, or characteristics through a series of questions or statements.

The evolution of survey research in market research has been profound, transitioning from paper-based questionnaires posted randomly to respondents' homes to sophisticated online platforms that offer far more convenient ways to reach the desired audience. Its importance lies not just in the breadth of data it can collect but in the depth of understanding it provides, allowing researchers and businesses alike to tap into the psyche of their target audience.


Reasons for Conducting Survey Research

The reasons for conducting survey research are as diverse as the questions it seeks to answer, yet they all converge on a common goal: to inform decision-making processes. Here's why survey research is pivotal:

  • Honest Feedback and Insights: Survey research offers a platform for respondents to provide candid feedback on products, services, or policies, providing businesses with critical insights into consumer satisfaction and areas for improvement.
  • Privacy and Anonymity Benefits: By ensuring respondent anonymity, surveys encourage honest and uninhibited responses, leading to more accurate and reliable data.
  • Providing a Platform for Criticism and Improvement Suggestions: Surveys open up a dialogue between businesses and their clientele, offering a structured way for criticism and suggestions to be voiced constructively.
  • Iterative Feedback Loops: The iterative nature of survey research, with its ability to be conducted periodically, helps businesses track changes in consumer behavior and preferences over time, enabling continuous improvement and adaptation. This ongoing dialogue facilitated by survey research not only enriches the business-consumer relationship but also fosters an environment of continuous learning and improvement, ensuring that businesses remain agile and responsive to the evolving needs and expectations of their target audience.


Types of Survey Research Methods & Data Collection Methods

In the world of survey research, a range of methods each offer unique advantages tailored to a researcher's or business's specific goals.

Email Surveys

Email surveys represent a modern approach to data collection, utilizing email addresses stored on client databases to distribute questionnaires. This method is particularly appealing for its cost-effectiveness and efficiency, as it minimizes the financial expenditure associated with other methods. However, many businesses only hold email addresses relating to their current customer base, meaning that any studies performed using this approach will be limited in scope.

Online Panels

Online panels represent the most convenient form of online research. Panel companies source a wide variety of potential respondents who are available for any company to survey on a cost-per-interview (CPI) basis. However, this convenience comes with drawbacks: online panels are known for data quality issues that are likely to affect the results of your survey if not guarded against.

Phone Surveys (CATI)

Computer Assisted Telephone Interviewing (CATI) combines the efficiency of computer-guided surveys with the personal touch of telephone communication. This method is advantageous for its ability to cover wide populations, including those in remote areas, ensuring a broader demographic reach. The direct interaction between the interviewer and respondent can also enhance response rates and clarity on questions. However, personal engagement comes at a cost, making CATI more time-consuming and expensive than online methods. 

Face-to-Face Interviews

The most traditional method, face-to-face interviews, involves direct, in-person interaction between the interviewer and the respondent. This approach is highly valued for its high response rates and the depth of insight it can provide, including non-verbal cues that offer additional layers of understanding. Although this method is resource-intensive, requiring significant investment in trained personnel and logistics, the quality of data obtained can be unmatched. 

Survey Research Timeframe Methods

Longitudinal Survey Research tracks the same group of respondents over time, offering invaluable insights into trends and changes in behaviors or attitudes. This method is ideal for observing long-term patterns, such as the impact of societal changes on individual behaviors. 

Cross-sectional / Ad-hoc Survey Research provides a snapshot of a population at a specific point in time, making it perfect for capturing immediate insights across various demographics. This method's versatility is showcased in applications ranging from consumer satisfaction surveys to public opinion polls, where understanding the current state of affairs is crucial. 

Each of these survey research methods brings its own strengths to the table, allowing researchers to tailor their approach to the specific nuances of their study objectives. By selecting the method that best aligns with their goals, researchers can maximize the effectiveness of their data collection efforts, paving the way for impactful insights and informed decision-making.

Uses and Examples of Survey Research

Survey research's versatility allows it to be applied across a myriad of fields, offering insights that drive decision-making and strategic planning. Its applications range from gauging public opinion and consumer preferences to evaluating the effectiveness of policies and programs.

Marketing Research

In marketing research, survey research is pivotal in understanding consumer behavior, preferences, and satisfaction levels. For example, a retail company may conduct online surveys to determine customer satisfaction with its products and services. The feedback collected can highlight areas of success and identify opportunities for improvement, guiding the company in refining its offerings and enhancing the customer experience.


Political Polling

Political polling represents another significant application of survey research, providing insights into voter attitudes, preferences, and likely behaviors. These surveys can influence campaign strategies, policy development, and understanding of public sentiment on various issues. A notable instance is the use of survey research during electoral campaigns to track the popularity of candidates and the effectiveness of their messages.

Public Health Research

Public health studies frequently utilize survey research to assess health behaviors, awareness of health issues, and the impact of health interventions. For example, a cross-sectional survey might be conducted to evaluate the effectiveness of a public health campaign aimed at reducing smoking rates. The data gathered can inform health officials about the campaign's impact and guide future public health strategies.

Educational Research

Educational research also benefits from survey methods, with studies designed to evaluate educational interventions, student satisfaction, and learning outcomes. For instance, longitudinal surveys can track students' academic progress over time, providing insights into the effectiveness of educational programs and interventions.

These examples underscore the adaptability of survey research, enabling tailored approaches to collecting and analyzing data across various sectors. Its capacity to yield actionable insights makes it an invaluable tool in the pursuit of knowledge and improvement.

Advantages and Disadvantages

Survey research is a powerful tool in the arsenal of researchers, offering numerous advantages while also presenting certain challenges that must be navigated carefully.

Advantages of Survey Research

  • Cost-Effectiveness: Survey research is often more affordable than other data collection methods, especially beneficial when targeting large populations.
  • Large Sample Sizes: It enables the collection of data from a large sample size (audience), enhancing the generalizability of findings.
  • Flexibility in Design: Surveys allow for customization in question formats, delivery methods, and structure, tailoring the approach to specific research needs.
  • Ease of Administration: With options for online, mail, phone, and in-person surveys, administration can be adapted to best reach the target audience.
  • Efficient Data Analysis: The quantitative nature of survey responses facilitates straightforward analysis using statistical software, aiding in the quick identification of trends and insights.

Disadvantages of Survey Research

  • Response Bias: The potential for respondents to provide socially desirable answers rather than truthful ones can lead to biased data.
  • Sampling Issues: Challenges such as non-response bias and difficulty in reaching certain populations can compromise the representativeness of the sample.
  • Questionnaire Design Challenges: Crafting questions that are clear and unbiased while avoiding ambiguity is complex and can impact the validity of the results.
  • Lack of Response Context: Surveys may not capture the nuances behind responses, limiting understanding of the reasons behind certain behaviors or opinions.
  • Time and Resource Constraints: Designing, administering, and analyzing surveys can be resource-intensive, potentially limiting their scope and depth.
  • Data Quality: The rise of survey panels has increased the likelihood of poor-quality responses, or even automated bots, affecting survey results.

Understanding these advantages and disadvantages is crucial for researchers as they design and implement survey research studies. By carefully considering these factors, it is possible to leverage the strengths of survey research while mitigating its limitations, ensuring the collection of valuable and actionable insights.

Survey Research Design Process

The design and execution of survey research involve several critical steps, each contributing to the overall quality and reliability of the findings. By following a structured process, researchers can ensure that their survey research effectively meets its objectives.

  • Define Survey Research Objectives: The first step involves clearly defining what you aim to achieve with your survey. Objectives should be specific, measurable, achievable, relevant, and time-bound (SMART). This clarity guides the subsequent steps of the survey design process.
  • Identify Your Target Audience: Knowing who you need to survey is crucial. The target audience should align with the research objectives, ensuring that the data collected is relevant and insightful.
  • Select the Appropriate Method: Based on the objectives and the target audience, choose the most suitable survey method. Consider factors such as budget, time constraints, and the need for depth vs. breadth of data.
  • Plan and Execute the Study: This involves crafting the survey questionnaire, deciding on the distribution method (online, mail, phone, face-to-face), and determining the timeline for data collection. Ensuring questions are clear, unbiased, and relevant is critical to gathering valuable data.
  • Analyze Data and Make Decisions: Once data collection is complete, analyze the responses to identify trends, patterns, and insights. Use statistical software for quantitative analysis and consider qualitative methods for open-ended responses. The findings should inform decision-making processes, guiding strategic planning and interventions.

By following these steps, researchers can maximize the effectiveness and reliability of their survey research, paving the way for meaningful insights and informed decision-making.

Sampling Methods in Survey Research

A crucial aspect of survey research is selecting a representative sample from the target population. The sampling method plays a significant role in the quality and generalizability of the research findings. There are two main types of sampling methods: probability sampling and non-probability sampling.

  • Probability Sampling: This method ensures every member of the target population has a known, non-zero chance of being selected. Types of probability sampling include simple random sampling, stratified random sampling, and cluster sampling. This method is preferred for its ability to produce representative samples, allowing for generalizations about the population from the sample data.
  • Non-Probability Sampling: In non-probability sampling, not every member of the population has a known or equal chance of selection. This category includes convenience sampling, quota sampling, and purposive sampling. While less rigorous than probability sampling, non-probability methods are often used when time and resources are limited or when specific, targeted insights are required.

Choosing the right sampling method is critical to the success of survey research. For example, a market research firm aiming to understand consumer preferences across different demographics might use stratified random sampling to ensure that the sample accurately reflects the population's diversity. Conversely, a preliminary study exploring a new phenomenon might opt for convenience sampling to quickly gather initial insights.
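A minimal sketch of proportionate stratified random sampling in Python with pandas, assuming a hypothetical sampling frame with an age_band column; the 10% sampling fraction is an arbitrary choice for illustration.

```python
import pandas as pd

# Hypothetical sampling frame with a demographic column to stratify on.
frame = pd.read_csv("sampling_frame.csv")   # columns: respondent_id, age_band, ...

# Proportionate stratified random sample: draw 10% from every age band,
# so each stratum is represented in proportion to its size in the frame.
sample = (
    frame.groupby("age_band", group_keys=False)
         .apply(lambda g: g.sample(frac=0.10, random_state=42))
)
print(sample["age_band"].value_counts(normalize=True))
```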

Understanding the strengths and limitations of each sampling method allows researchers to make informed choices, balancing rigor with practical constraints to best achieve their research objectives.


Survey research provides invaluable insights across diverse fields, from consumer behavior to public policy. Its flexibility, cost-effectiveness, and broad reach make it an indispensable tool for researchers aiming to gather actionable data. Despite its challenges, such as response bias and sampling complexities, careful design and methodological rigor can mitigate these issues, enhancing the reliability and validity of findings.



Survey research

Survey research is a research method involving the use of standardised questionnaires or interviews to collect data about people and their preferences, thoughts, and behaviours in a systematic manner. Although census surveys were conducted as early as Ancient Egypt, survey as a formal research method was pioneered in the 1930–40s by sociologist Paul Lazarsfeld to examine the effects of the radio on political opinion formation of the United States. This method has since become a very popular method for quantitative research in the social sciences.

The survey method can be used for descriptive, exploratory, or explanatory research. This method is best suited for studies that have individual people as the unit of analysis. Although other units of analysis, such as groups, organisations or dyads—pairs of organisations, such as buyers and sellers—are also studied using surveys, such studies often use a specific person from each unit as a ‘key informant’ or a ‘proxy’ for that unit. Consequently, such surveys may be subject to respondent bias if the chosen informant does not have adequate knowledge or has a biased opinion about the phenomenon of interest. For instance, Chief Executive Officers may not adequately know employees’ perceptions or teamwork in their own companies, and may therefore be the wrong informant for studies of team dynamics or employee self-esteem.

Survey research has several inherent strengths compared to other research methods. First, surveys are an excellent vehicle for measuring a wide variety of unobservable data, such as people’s preferences (e.g., political orientation), traits (e.g., self-esteem), attitudes (e.g., toward immigrants), beliefs (e.g., about a new law), behaviours (e.g., smoking or drinking habits), or factual information (e.g., income). Second, survey research is also ideally suited for remotely collecting data about a population that is too large to observe directly. A large area—such as an entire country—can be covered by postal, email, or telephone surveys using meticulous sampling to ensure that the population is adequately represented in a small sample. Third, due to their unobtrusive nature and the ability to respond at one’s convenience, questionnaire surveys are preferred by some respondents. Fourth, interviews may be the only way of reaching certain population groups such as the homeless or illegal immigrants for which there is no sampling frame available. Fifth, large sample surveys may allow detection of small effects even while analysing multiple variables, and depending on the survey design, may also allow comparative analysis of population subgroups (i.e., within-group and between-group analysis). Sixth, survey research is more economical in terms of researcher time, effort and cost than other methods such as experimental research and case research. At the same time, survey research also has some unique disadvantages. It is subject to a large number of biases such as non-response bias, sampling bias, social desirability bias, and recall bias, as discussed at the end of this chapter.

Depending on how the data is collected, survey research can be divided into two broad categories: questionnaire surveys (which may be postal, group-administered, or online surveys), and interview surveys (which may be personal, telephone, or focus group interviews). Questionnaires are instruments that are completed in writing by respondents, while interviews are completed by the interviewer based on verbal responses provided by respondents. As discussed below, each type has its own strengths and weaknesses in terms of their costs, coverage of the target population, and researcher’s flexibility in asking questions.

Questionnaire surveys

Invented by Sir Francis Galton, a questionnaire is a research instrument consisting of a set of questions (items) intended to capture responses from respondents in a standardised manner. Questions may be unstructured or structured. Unstructured questions ask respondents to provide a response in their own words, while structured questions ask respondents to select an answer from a given set of choices. Subjects’ responses to individual questions (items) on a structured questionnaire may be aggregated into a composite scale or index for statistical analysis. Questions should be designed in such a way that respondents are able to read, understand, and respond to them in a meaningful way, and hence the survey method may not be appropriate or practical for certain demographic groups such as children or the illiterate.

Most questionnaire surveys tend to be self-administered postal surveys , where the same questionnaire is posted to a large number of people, and willing respondents can complete the survey at their convenience and return it in prepaid envelopes. Postal surveys are advantageous in that they are unobtrusive and inexpensive to administer, since bulk postage is cheap in most countries. However, response rates from postal surveys tend to be quite low since most people ignore survey requests. There may also be long delays (several months) in respondents’ completing and returning the survey, or they may even simply lose it. Hence, the researcher must continuously monitor responses as they are being returned, track and send non-respondents repeated reminders (two or three reminders at intervals of one to one and a half months is ideal). Questionnaire surveys are also not well-suited for issues that require clarification on the part of the respondent or those that require detailed written responses. Longitudinal designs can be used to survey the same set of respondents at different times, but response rates tend to fall precipitously from one survey to the next.

A second type of survey is a group-administered questionnaire. A sample of respondents is brought together at a common place and time, and each respondent is asked to complete the survey questionnaire while in that room. Respondents enter their responses independently without interacting with one another. This format is convenient for the researcher, and a high response rate is assured. If respondents do not understand any specific question, they can ask for clarification. In many organisations, it is relatively easy to assemble a group of employees in a conference room or lunch room, especially if the survey is approved by corporate executives.

A more recent type of questionnaire survey is an online or web survey. These surveys are administered over the Internet using interactive forms. Respondents may receive an email request for participation in the survey with a link to a website where the survey may be completed. Alternatively, the survey may be embedded into an email, and can be completed and returned via email. These surveys are very inexpensive to administer, results are instantly recorded in an online database, and the survey can be easily modified if needed. However, if the survey website is not password-protected or designed to prevent multiple submissions, the responses can be easily compromised. Furthermore, sampling bias may be a significant issue since the survey cannot reach people who do not have computer or Internet access, such as many of the poor, senior, and minority groups, and the respondent sample is skewed toward a younger demographic who are online much of the time and have the time and ability to complete such surveys. Computing the response rate may be problematic if the survey link is posted on LISTSERVs or bulletin boards instead of being emailed directly to targeted respondents. For these reasons, many researchers prefer dual-media surveys (e.g., postal survey and online survey), allowing respondents to select their preferred method of response.

Constructing a survey questionnaire is an art. Numerous decisions must be made about the content of questions, their wording, format, and sequencing, all of which can have important consequences for the survey responses.

Response formats. Survey questions may be structured or unstructured. Responses to structured questions are captured using one of the following response formats:

Dichotomous response, where respondents are asked to select one of two possible choices, such as true/false, yes/no, or agree/disagree. An example of such a question is: Do you think that the death penalty is justified under some circumstances? (circle one): yes / no.

Nominal response, where respondents are presented with more than two unordered options, such as: What is your industry of employment?: manufacturing / consumer services / retail / education / healthcare / tourism and hospitality / other.

Ordinal response, where respondents have more than two ordered options, such as: What is your highest level of education?: high school / bachelor’s degree / postgraduate degree.

Interval-level response, where respondents are presented with a 5-point or 7-point Likert scale, semantic differential scale, or Guttman scale. Each of these scale types was discussed in a previous chapter.

Continuous response, where respondents enter a continuous (ratio-scaled) value with a meaningful zero point, such as their age or tenure in a firm. These responses generally tend to be of the fill-in-the-blanks type.

Question content and wording. Responses obtained in survey research are very sensitive to the types of questions asked. Poorly framed or ambiguous questions will likely result in meaningless responses with very little value. Dillman (1978) [1] recommends several rules for creating good survey questions. Every single question in a survey should be carefully scrutinised for the following issues:

Is the question clear and understandable? Survey questions should be stated in very simple language, preferably in active voice, and without complicated words or jargon that may not be understood by a typical respondent. All questions in the questionnaire should be worded in a similar manner to make it easy for respondents to read and understand them. The only exception is if your survey is targeted at a specialised group of respondents, such as doctors, lawyers and researchers, who use such jargon in their everyday environment.

Is the question worded in a negative manner? Negatively worded questions such as ‘Should your local government not raise taxes?’ tend to confuse many respondents and lead to inaccurate responses. Double-negatives should be avoided when designing survey questions.

Is the question ambiguous? Survey questions should not use words or expressions that may be interpreted differently by different respondents (e.g., words like ‘any’ or ‘just’). For instance, if you ask a respondent, ‘What is your annual income?’, it is unclear whether you are referring to salary/wages only or also to dividend, rental, and other income, and whether you mean personal income, family income (including spouse’s wages), or personal and business income. Different interpretations by different respondents will lead to incomparable responses that cannot be interpreted correctly.

Does the question have biased or value-laden words? Bias refers to any property of a question that encourages subjects to answer in a certain way. Kenneth Rasinski (1989) [2] examined several studies on people’s attitudes toward government spending, and observed that respondents tend to indicate stronger support for ‘assistance to the poor’ and less for ‘welfare’, even though both terms had the same meaning. In this study, more support was also observed for ‘halting rising crime rate’ and less for ‘law enforcement’, more for ‘solving problems of big cities’ and less for ‘assistance to big cities’, and more for ‘dealing with drug addiction’ and less for ‘drug rehabilitation’. Biased language or tone tends to skew observed responses. It is often difficult to anticipate biased wording in advance, but to the greatest extent possible, survey questions should be carefully scrutinised to avoid biased language.

Is the question double-barrelled? Double-barrelled questions are those that can have multiple answers. For example, ‘Are you satisfied with the hardware and software provided for your work?’. In this example, how should a respondent answer if they are satisfied with the hardware, but not with the software, or vice versa? It is always advisable to separate double-barrelled questions into separate questions: ‘Are you satisfied with the hardware provided for your work?’, and ‘Are you satisfied with the software provided for your work?’. Another example: ‘Does your family favour public television?’. Some people may favour public TV for themselves, but favour certain cable TV programs such as Sesame Street for their children.

Is the question too general? Sometimes, questions that are too general may not accurately convey respondents’ perceptions. If you asked someone how they liked a certain book, using a response scale ranging from ‘not at all’ to ‘extremely well’, and that person selected ‘extremely well’, what do they mean? Instead, ask more specific behavioural questions, such as, ‘Will you recommend this book to others, or do you plan to read other books by the same author?’. Likewise, instead of asking, ‘How big is your firm?’ (which may be interpreted differently by respondents), ask, ‘How many people work for your firm?’, and/or ‘What is the annual revenue of your firm?’, which are both measures of firm size.

Is the question too detailed? Avoid unnecessarily detailed questions that serve no specific research purpose. For instance, do you need the age of each child in a household, or is just the number of children in the household acceptable? However, if unsure, it is better to err on the side of details than generality.

Is the question presumptuous? If you ask, ‘What do you see as the benefits of a tax cut?’, you are presuming that the respondent sees the tax cut as beneficial. Many people may not view tax cuts as being beneficial, because tax cuts generally lead to lesser funding for public schools, larger class sizes, and fewer public services such as police, ambulance, and fire services. Avoid questions with built-in presumptions.

Is the question imaginary? A popular question in many television game shows is, ‘If you win a million dollars on this show, how will you spend it?’. Most respondents have never been faced with such an amount of money before and have never thought about it—they may not even know that after taxes, they will get only about $640,000 or so in the United States, and in many cases, that amount is spread over a 20-year period—and so their answers tend to be quite random, such as take a tour around the world, buy a restaurant or bar, spend on education, save for retirement, help parents or children, or have a lavish wedding. Imaginary questions have imaginary answers, which cannot be used for making scientific inferences.

Do respondents have the information needed to correctly answer the question? Oftentimes, we assume that subjects have the necessary information to answer a question, when in reality, they do not. Even if a response is obtained, these responses tend to be inaccurate given the subjects’ lack of knowledge about the question being asked. For instance, we should not ask the CEO of a company about day-to-day operational details that they may not be aware of, or ask teachers about how much their students are learning, or ask high-schoolers, ‘Do you think the US Government acted appropriately in the Bay of Pigs crisis?’.

Question sequencing. In general, questions should flow logically from one to the next. To achieve the best response rates, questions should flow from the least sensitive to the most sensitive, from the factual and behavioural to the attitudinal, and from the more general to the more specific. Some general rules for question sequencing:

Start with easy non-threatening questions that can be easily recalled. Good options are demographics (age, gender, education level) for individual-level surveys and firmographics (employee count, annual revenues, industry) for firm-level surveys.

Never start with an open-ended question.

If following a historical sequence of events, follow a chronological order from earliest to latest.

Ask about one topic at a time. When switching topics, use a transition, such as, ‘The next section examines your opinions about…’

Use filter or contingency questions as needed, such as, ‘If you answered “yes” to question 5, please proceed to Section 2. If you answered “no”, go to Section 3’.

Other golden rules. Do unto your respondents what you would have them do unto you. Be attentive and appreciative of respondents’ time, attention, trust, and confidentiality of personal information. Always practice the following strategies for all survey research:

People’s time is valuable. Be respectful of their time. Keep your survey as short as possible and limit it to what is absolutely necessary. Respondents do not like spending more than 10-15 minutes on any survey, no matter how important it is. Longer surveys tend to dramatically lower response rates.

Always assure respondents about the confidentiality of their responses, and how you will use their data (e.g., for academic research) and how the results will be reported (usually, in the aggregate).

For organisational surveys, assure respondents that you will send them a copy of the final results, and make sure that you follow up with your promise.

Thank your respondents for their participation in your study.

Finally, always pretest your questionnaire, at least using a convenience sample, before administering it to respondents in a field setting. Such pretesting may uncover ambiguity, lack of clarity, or biases in question wording, which should be eliminated before administering to the intended sample.

Interview survey

Interviews are a more personalised data collection method than questionnaires, and are conducted by trained interviewers using the same research protocol as questionnaire surveys (i.e., a standardised set of questions). However, unlike a questionnaire, the interview script may contain special instructions for the interviewer that are not seen by respondents, and may include space for the interviewer to record personal observations and comments. In addition, unlike postal surveys, the interviewer has the opportunity to clarify any issues raised by the respondent or ask probing or follow-up questions. However, interviews are time-consuming and resource-intensive. Interviewers need special interviewing skills as they are considered to be part of the measurement instrument, and must proactively strive not to artificially bias the observed responses.

The most typical form of interview is a personal or face-to-face interview, where the interviewer works directly with the respondent to ask questions and record their responses. Personal interviews may be conducted at the respondent’s home or office location. This approach may even be favoured by some respondents, while others may feel uncomfortable allowing a stranger into their homes. However, skilled interviewers can persuade respondents to co-operate, dramatically improving response rates.

A variation of the personal interview is a group interview, also called a focus group. In this technique, a small group of respondents (usually 6–10 respondents) are interviewed together in a common location. The interviewer is essentially a facilitator whose job is to lead the discussion, and ensure that every person has an opportunity to respond. Focus groups allow deeper examination of complex issues than other forms of survey research, because when people hear others talk, it often triggers responses or ideas that they did not think about before. However, focus group discussion may be dominated by a single strong personality, and some individuals may be reluctant to voice their opinions in front of their peers or superiors, especially while dealing with a sensitive issue such as employee underperformance or office politics. Because of their small sample size, focus groups are usually used for exploratory research rather than descriptive or explanatory research.

A third type of interview survey is a telephone interview. In this technique, interviewers contact potential respondents over the phone, typically based on a random selection of people from a telephone directory, to ask a standard set of survey questions. A more recent and technologically advanced approach is computer-assisted telephone interviewing (CATI), which is increasingly used by academic, government, and commercial survey researchers. Here the interviewer is a telephone operator who is guided through the interview process by a computer program displaying instructions and questions to be asked. The system also selects respondents randomly using a random digit dialling technique, and records responses using voice capture technology. Once respondents are on the phone, higher response rates can be obtained. This technique is not ideal for rural areas where telephone density is low, and also cannot be used for communicating non-audio information such as graphics or product demonstrations.

Role of interviewer. The interviewer has a complex and multi-faceted role in the interview process, which includes the following tasks:

Prepare for the interview: Since the interviewer is in the forefront of the data collection effort, the quality of data collected depends heavily on how well the interviewer is trained to do the job. The interviewer must be trained in the interview process and the survey method, and also be familiar with the purpose of the study, how responses will be stored and used, and sources of interviewer bias. They should also rehearse and time the interview prior to the formal study.

Locate and enlist the co-operation of respondents: Particularly in personal, in-home surveys, the interviewer must locate specific addresses, and work around respondents’ schedules at sometimes undesirable times such as during weekends. They should also be like a salesperson, selling the idea of participating in the study.

Motivate respondents: Respondents often feed off the motivation of the interviewer. If the interviewer is disinterested or inattentive, respondents will not be motivated to provide useful or informative responses either. The interviewer must demonstrate enthusiasm about the study, communicate the importance of the research to respondents, and be attentive to respondents’ needs throughout the interview.

Clarify any confusion or concerns: Interviewers must be able to think on their feet and address unanticipated concerns or objections raised by respondents to the respondents’ satisfaction. Additionally, they should ask probing questions as necessary even if such questions are not in the script.

Observe quality of response: The interviewer is in the best position to judge the quality of information collected, and may supplement responses obtained using personal observations of gestures or body language as appropriate.

Conducting the interview. Before the interview, the interviewer should prepare a kit to carry to the interview session, consisting of a cover letter from the principal investigator or sponsor, adequate copies of the survey instrument, photo identification, and a telephone number for respondents to call to verify the interviewer’s authenticity. The interviewer should also try to call respondents ahead of time to set up an appointment if possible. To start the interview, they should speak in an imperative and confident tone, such as, ‘I’d like to take a few minutes of your time to interview you for a very important study’, instead of, ‘May I come in to do an interview?’. They should introduce themselves, present personal credentials, explain the purpose of the study in one to two sentences, and assure respondents that their participation is voluntary, and their comments are confidential, all in less than a minute. No big words or jargon should be used, and no details should be provided unless specifically requested. If the interviewer wishes to record the interview, they should ask for respondents’ explicit permission before doing so. Even if the interview is recorded, the interviewer must take notes on key issues, probes, or verbatim phrases.

During the interview, the interviewer should follow the questionnaire script and ask questions exactly as written, and not change the words to make the question sound friendlier. They should also not change the order of questions or skip any question that may have been answered earlier. Any issues with the questions should be discussed during rehearsal prior to the actual interview sessions. The interviewer should not finish the respondent’s sentences. If the respondent gives a brief cursory answer, the interviewer should probe the respondent to elicit a more thoughtful, thorough response. Some useful probing techniques are:

The silent probe: Just pausing and waiting without going into the next question may suggest to respondents that the interviewer is waiting for a more detailed response.

Overt encouragement: An occasional ‘uh-huh’ or ‘okay’ may encourage the respondent to go into greater details. However, the interviewer must not express approval or disapproval of what the respondent says.

Ask for elaboration: Such as, ‘Can you elaborate on that?’ or ‘A minute ago, you were talking about an experience you had in high school. Can you tell me more about that?’.

Reflection: The interviewer can try the psychotherapist’s trick of repeating what the respondent said. For instance, ‘What I’m hearing is that you found that experience very traumatic’ and then pause and wait for the respondent to elaborate.

After the interview is completed, the interviewer should thank respondents for their time, tell them when to expect the results, and not leave hastily. Immediately after leaving, they should write down any notes or key observations that may help interpret the respondent’s comments better.

Biases in survey research

Despite all of its strengths and advantages, survey research is often tainted with systematic biases that may invalidate some of the inferences derived from such surveys. Five such biases are the non-response bias, sampling bias, social desirability bias, recall bias, and common method bias.

Non-response bias. Survey research is generally notorious for its low response rates. A response rate of 15-20 per cent is typical in a postal survey, even after two or three reminders. If the majority of the targeted respondents fail to respond to a survey, this may indicate a systematic reason for the low response rate, which may in turn raise questions about the validity of the study’s results. For instance, dissatisfied customers tend to be more vocal about their experience than satisfied customers, and are therefore more likely to respond to questionnaire surveys or interview requests than satisfied customers. Hence, any respondent sample is likely to have a higher proportion of dissatisfied customers than the underlying population from which it is drawn. In this instance, not only will the results lack generalisability, but the observed outcomes may also be an artefact of the biased sample. Several strategies may be employed to improve response rates:

Advance notification: Sending a short letter to the targeted respondents soliciting their participation in an upcoming survey can prepare them in advance and improve their propensity to respond. The letter should state the purpose and importance of the study, mode of data collection (e.g., via a phone call, a survey form in the mail, etc.), and appreciation for their co-operation. A variation of this technique may be to ask the respondent to return a prepaid postcard indicating whether or not they are willing to participate in the study.

Relevance of content: People are more likely to respond to surveys examining issues of relevance or importance to them.

Respondent-friendly questionnaire: Shorter survey questionnaires tend to elicit higher response rates than longer questionnaires. Furthermore, questions that are clear, non-offensive, and easy to respond to tend to attract higher response rates.

Endorsement: For organisational surveys, it helps to gain endorsement from a senior executive attesting to the importance of the study to the organisation. Such endorsement can be in the form of a cover letter or a letter of introduction, which can improve the researcher’s credibility in the eyes of the respondents.

Follow-up requests: Multiple follow-up requests may coax some non-respondents to respond, even if their responses are late.

Interviewer training: Response rates for interviews can be improved with skilled interviewers trained in how to request interviews, use computerised dialling techniques to identify potential respondents, and schedule call-backs for respondents who could not be reached.

Incentives: Incentives in the form of cash or gift cards, giveaways such as pens or stress balls, entry into a lottery, draw or contest, discount coupons, promise of contribution to charity, and so forth may increase response rates.

Non-monetary incentives: Businesses, in particular, are more prone to respond to non-monetary incentives than financial incentives. An example of such a non-monetary incentive is a benchmarking report comparing the business’s individual response against the aggregate of all responses to a survey.

Confidentiality and privacy: Finally, assurances that respondents’ private data or responses will not fall into the hands of any third party may help improve response rates.

Sampling bias. Telephone surveys conducted by calling a random sample of publicly available telephone numbers will systematically exclude people with unlisted telephone numbers, mobile phone numbers, and people who are unable to answer the phone when the survey is being conducted—for instance, if they are at work—and will include a disproportionate number of respondents who have landline telephone services with listed phone numbers and people who are home during the day, such as the unemployed, the disabled, and the elderly. Likewise, online surveys tend to include a disproportionate number of students and younger people who are constantly on the Internet, and systematically exclude people with limited or no access to computers or the Internet, such as the poor and the elderly. Similarly, questionnaire surveys tend to exclude children and the illiterate, who are unable to read, understand, or meaningfully respond to the questionnaire. A different kind of sampling bias relates to sampling the wrong population, such as asking teachers (or parents) about their students’ (or children’s) academic learning, or asking CEOs about operational details in their company. Such biases make the respondent sample unrepresentative of the intended population and hurt generalisability claims about inferences drawn from the biased sample.

Social desirability bias. Many respondents tend to avoid negative opinions or embarrassing comments about themselves, their employers, family, or friends. With negative questions such as, ‘Do you think that your project team is dysfunctional?’, ‘Is there a lot of office politics in your workplace?’, or ‘Have you ever illegally downloaded music files from the Internet?’, the researcher may not get truthful responses. This tendency among respondents to ‘spin the truth’ in order to portray themselves in a socially desirable manner is called the ‘social desirability bias’, which hurts the validity of responses obtained from survey research. There is practically no way of overcoming the social desirability bias in a questionnaire survey, but in an interview setting, an astute interviewer may be able to spot inconsistent answers and ask probing questions or use personal observations to supplement respondents’ comments.

Recall bias. Responses to survey questions often depend on subjects’ motivation, memory, and ability to respond. Particularly when dealing with events that happened in the distant past, respondents may not adequately remember their own motivations or behaviours, or perhaps their memory of such events may have evolved with time and no longer be retrievable. For instance, if a respondent is asked to describe his/her utilisation of computer technology one year ago, or even memorable childhood events like birthdays, their response may not be accurate due to difficulties with recall. One possible way of overcoming the recall bias is by anchoring the respondent’s memory in specific events as they happened, rather than asking them to recall their perceptions and motivations from memory.

Common method bias. Common method bias refers to the amount of spurious covariance shared between independent and dependent variables that are measured at the same point in time, such as in a cross-sectional survey, using the same instrument, such as a questionnaire. In such cases, the phenomenon under investigation may not be adequately separated from measurement artefacts. Standard statistical tests are available to test for common method bias, such as Harman’s single-factor test (Podsakoff, MacKenzie, Lee & Podsakoff, 2003), [3] Lindell and Whitney’s (2001) [4] marker variable technique, and so forth. This bias can potentially be avoided if the independent and dependent variables are measured at different points in time using a longitudinal survey design, or if these variables are measured using different methods, such as computerised recording of the dependent variable versus questionnaire-based self-rating of independent variables.
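As a rough, non-authoritative sketch, Harman's single-factor test is often approximated by extracting unrotated components from all survey items and checking whether a single component dominates the variance. The Python example below uses PCA as that stand-in; the file name, column prefix, and the 50% rule of thumb are assumptions for illustration only.

```python
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical item-level data covering both independent and dependent constructs.
df = pd.read_csv("survey_items.csv")
items = [c for c in df.columns if c.startswith("item_")]

# Rough approximation of Harman's single-factor test: extract unrotated
# components and check whether one component dominates the shared variance.
X = StandardScaler().fit_transform(df[items].dropna())
pca = PCA().fit(X)
first = pca.explained_variance_ratio_[0]
print(f"Variance explained by first component: {first:.1%}")
# A common (and debated) rule of thumb: be concerned if one component explains > 50%.
```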

  • Dillman, D. (1978). Mail and telephone surveys: The total design method. New York: Wiley. ↵
  • Rasinski, K. (1989). The effect of question wording on public support for government spending. Public Opinion Quarterly, 53(3), 388–394. ↵
  • Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879–903. http://dx.doi.org/10.1037/0021-9010.88.5.879. ↵
  • Lindell, M. K., & Whitney, D. J. (2001). Accounting for common method variance in cross-sectional research designs. Journal of Applied Psychology, 86(1), 114–121. ↵

Social Science Research: Principles, Methods and Practices (Revised edition) Copyright © 2019 by Anol Bhattacherjee is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


What is Survey Analysis?

Survey Analysis is the process of turning research data into actionable, business-oriented insights. This article explains what survey analysis is and introduces some of the tools and methods to help you turn survey research data into relevant and actionable insights.

Not long ago, doing survey research was costly and needed a specialist agency.

Then technology made it possible for marketers and in-house researchers to conduct and analyse their own surveys – working in more agile ways.

One of the first widely accessible tools for online research and analysis, SurveyMonkey, now has over 40 million customers worldwide and 20 million questions answered daily. It also enables non-expert users to process and analyse results efficiently.

However, the ability to gather results does not guarantee you will get the absolute best from your data: data on its own means nothing without good analysis.


Types of Survey Analysis

There are a few ways to analyse data to respond to different types of business needs.

Descriptive Analysis

This is the most common approach and is one of the first ways to look at survey data.

It is relatively simple and describes the data captured. It can be used for quick answers, but it can also be the basis for hypothesis creation – prior to statistical analysis – helping to identify outliers and patterns.

Common descriptive analysis approaches include the following (illustrated in the short sketch after this list):

  • Mean, Median, Mode, Count, Percent, Frequency
  • Range, Variance, Standard Deviation
  • Percentile and Quartile ranks.
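A minimal Python sketch of these descriptive measures, using a hypothetical satisfaction_score column; the file and column names are illustrative only.

```python
import pandas as pd

# Hypothetical numeric survey variable, e.g. a 0-10 satisfaction score.
df = pd.read_csv("survey_responses.csv")
score = df["satisfaction_score"]

print(score.describe())                      # count, mean, std, min, quartiles, max
print("Median:", score.median())
print("Mode:", score.mode().tolist())
print("Variance:", score.var())
print("90th percentile:", score.quantile(0.9))
print(score.value_counts(normalize=True))    # percent / frequency of each value
```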

Worldometer have been displaying an example of descriptive analysis covering the global evolution of COVID cases and deaths since 2019.

Inferential Analysis

This approach helps to preview future scenarios based on the analysis of past data.

It is recommended as a method of projecting the future behaviour of people, segments or markets. In addition, it allows you to evaluate economic fluctuations and consumption trends.

Usually based on long term survey results, this type of data mining uses several statistical tools to provide predictions:

  • Correlations: evaluate how changes in one variable relate to changes in another.
  • Cross-tabulation: analyse relationships between two or more variables.
  • Regression: understand how a dependent variable responds to one or more independent variables, identifying their direct impact.

This type of analysis can also make use of Machine Learning methods.

Diagnostic Analysis

This approach looks into the causes of an event, answering 'why' questions rather than just 'what happened'.

It can be used to support and deepen descriptive analysis, as well as to generate market and category insights.

Prescriptive Analysis

The most complex approach, and the one with the highest added value, prescriptive analysis is used to evaluate the consequences of possible actions, indicating the roadmap to achieve business goals.

It requires specific professionals and tools:

  • Data Science tools (Big Data, Machine Learning, Artificial Intelligence etc.)
  • Holistic business knowledge.
  • Deep knowledge of the ecosystem.

Specialist analytics companies like Machine Vantage analyse ‘big’ data from many sources alongside survey results to identify patterns that could lead to new consumer behaviours.

A great example of how it worked for marketing purposes is the insight behind new Ben & Jerry's Breakfast flavours.

For truly insightful analysis, careful preparation is crucial – from the brief and methodology choice, through questionnaire development, to data collection. It's important to know what you are looking for before you start collecting survey data.

Survey Analysis: A Practical Example

Good survey analysis goes beyond raw percentage answers (Descriptive Analysis) by transforming the data into insights and actions: finding the real story.

For example, suppose we want to evaluate the satisfaction of attendees of an online course. We can simply ask them “Are you satisfied with the quality of this course?” and get an effective response:


It seems great, but there are still 18% unhappy customers, and we would not have any other information to help improve their experience.

Applying survey analysis techniques would bring us insights that could guide changes and improvements, moving these 18% of customers to a happier place.

We could learn things like:

  • Why are they not satisfied?
  • Who are they?
  • Who are the satisfied customers? (to compare)

In order to analyse survey results, the first step is to dig into the data and be able to combine information, analyse subgroups and apply your own knowledge of the topic.

Let’s look into ways to leverage survey analysis to a higher level, turning data into insights through statistics.

Cross Tabulation

Understand the relationship between 2 or more questions (variables). For example, we could cross the satisfaction answer against the profile of the attendee.


You can also focus on a specific profile or subgroup, excluding the others. For example, by looking into the Teachers' group and how they have responded to other questions, we can further understand their higher dissatisfaction.
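A minimal Python sketch of both steps (a cross-tabulation of satisfaction by respondent profile, followed by a filter on one subgroup), assuming hypothetical column names.

```python
import pandas as pd

# Hypothetical course-feedback data: 'role' (Teacher, Student, ...) and 'satisfied' (yes/no).
df = pd.read_csv("course_feedback.csv")

# Cross-tabulate satisfaction against respondent profile (row percentages).
print(pd.crosstab(df["role"], df["satisfied"], normalize="index").round(2))

# Filter to a single subgroup and inspect its answers to other questions.
teachers = df[df["role"] == "Teacher"]
print(teachers["reason_for_dissatisfaction"].value_counts())
```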


Significance Testing

This helps you to understand whether any patterns in your data are likely to be genuine differences – or just the result of random variations.

Most survey analysis tools will identify whether differences between two numbers are statistically significant.

For example, if we say that teachers are significantly less satisfied than other groups at the 95% confidence level, it means there is only a 5% probability that a difference this large would appear by chance if there were no real difference between the groups.

Whether a difference reaches significance depends on your sample size and on how well the selected group represents the population you want to learn about.
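For example, a two-proportion z-test (here via statsmodels, with made-up counts) checks whether the gap between teachers and everyone else is larger than random variation would explain. The numbers below are purely illustrative.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts: satisfied respondents out of each group's total.
satisfied = [18, 150]   # teachers, everyone else
totals = [40, 200]

stat, p_value = proportions_ztest(satisfied, totals)
print(f"z = {stat:.2f}, p = {p_value:.3f}")
# At the 95% confidence level, p < 0.05 suggests the gap is unlikely to be
# explained by sampling noise alone.
```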

Benchmarking

You can also compare your satisfaction results with competitors, for example.

You need to include them in your survey and gather responses from audiences that have attended classes on other platforms. Then it is possible to measure your performance within the market environment.


Trend Analysis

If you have previous research, you can also visualise how results have changed during a specific period, and measure the improvement of business performance.


Correlation

Understand how 2 or more variables move together.

For instance, we could look at the relationship between the level of seniority of professionals and their satisfaction. This might tell us that the more senior they are, the less satisfied they are with the course – suggesting we need to improve course content for this audience.
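A hedged sketch of that check in Python: Spearman's rank correlation is a common choice for ordinal variables such as seniority bands and Likert scores. The column names are hypothetical.

```python
import pandas as pd
from scipy import stats

# Hypothetical data: 'seniority_level' (1-5 ordinal) and 'satisfaction' (1-5 Likert).
df = pd.read_csv("course_feedback.csv")

# Spearman's rank correlation suits ordinal variables like these.
rho, p_value = stats.spearmanr(df["seniority_level"], df["satisfaction"])
print(f"rho = {rho:.2f}, p = {p_value:.3f}")   # a negative rho would support the
                                               # "more senior, less satisfied" reading
```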

A more sophisticated analysis, regression, is used to understand the impact of one or more independent variables on a dependent variable.

You could measure the impact of session times, topics covered and lecturers on the satisfaction rating. You can then adjust whatever has the most influence on satisfaction scores, instead of changing several parts of the course at once.
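A minimal sketch of both ideas, assuming Python with pandas and statsmodels; the variables (seniority in years, session length in minutes, a 1-10 satisfaction score) and the tiny dataset are invented for illustration.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical respondent-level data (all values invented for illustration).
df = pd.DataFrame({
    "seniority_years": [1, 3, 5, 8, 12, 15, 20, 25],
    "session_minutes": [45, 60, 45, 90, 60, 90, 120, 120],
    "satisfaction":    [9, 9, 8, 7, 7, 6, 5, 4],   # 1-10 rating
})

# Correlation: how the variables move together.
print(df.corr().round(2))

# Regression: impact of the independent variables on the satisfaction score.
X = sm.add_constant(df[["seniority_years", "session_minutes"]])
model = sm.OLS(df["satisfaction"], X).fit()
print(model.params)   # the coefficient with the largest effect is the lever to pull first
```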

Tips for Successful Survey Analysis

Structuring the process of survey analysis will help you reach deeper insights more quickly and effectively.

The following steps will help you get the most out of it:


1. Identify the business question

Go back to the commercial questions driving the research.

These will be the top priority to focus on. Also bear in mind that it may take more than one survey question, combined, to answer the business issue.

2. Compare sub-groups

Are there any subgroups (age, gender, location, socio-demographic segment, product usage, brand knowledge etc.) within the sample?

If so, filter your main questions by these groups to understand patterns and differences among them.

Even segments that look tiny in the total sample view can point to niche (and valuable) opportunities.

3. Validate your hypothesis

Check whether the hypotheses you set out with are supported by the data, and whether new ones have emerged from it.

4. Use statistical significance tests

Whenever possible and relevant, this is how you show results are trustworthy and representative.

For that, your sample must be big enough – something that should have been thought through during planning!

5. Stay focused on your priorities

Resist the temptation to analyse every bit of data within every single subgroup available. This adds time and complexity, and loses the interest of your audience.

Less is really more here: more compelling, more structured and far more relevant.

6. Visualise your results clearly

The right chart can do wonders to engage your audience and create a friendly, informative report. The right display can even sharpen the point.

7. Bring context

Do not be afraid to branch out and bring in external information (trends, previous research and surveys, public data, etc.) that supports and grounds your findings and recommendations.

8. Give your point of view

Last, but not least, conclude.

Survey analysis is not just about putting the results into a presentation. It is about extracting insights for the business strategy.

Make the main opportunities crystal clear in your storyline. This does not need to be a final “conclusions” page; with the right storytelling it can sit within each chart.

Pitfalls to Avoid in Survey Analysis

Researchers very frequently find themselves surrounded by loads of data, cross tables and analyses that never seem to end.

There is always another angle or breakdown that has not been covered yet.


So, the temptation is to keep digging, and the overall process can take more time and resources than you planned for.

The quality and relevance of the survey analysis depend on the analyst knowing when and where to stop digging.

Developing and analysing a survey is a continuous process with key focus points to ensure success:

  • Have a clear and objective business issue or question to guide the development of the sample, research design, methodology and, consequently, the survey analysis.
  • Pay attention to the quality of the questions to avoid bias, leading respondents or omitting options.
  • Have a data processing plan to ensure that the most relevant breakdowns and cross tables will be available.
  • Focus the analysis on the main reason for the survey, to avoid over-exploring.

When dealing with large amounts of data, the right survey analysis software is essential. It lets you automate the process by analysing large volumes of responses simultaneously.

There are several specialist tools available for survey analysis, including crunch.io, Infotools and Knowledgehound.

Some survey platforms have analysis tools embedded, like Survey Monkey, Askia, Glow and Toluna.

You can even use tools like Microsoft Excel to calculate means, counts and percentages, or to filter information and run correlations.
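The same basic counts, percentages, filters and correlations can also be scripted. A small sketch in Python with pandas, where the file name and column names are assumptions rather than a real export:

```python
import pandas as pd

# Hypothetical export of raw responses, one row per respondent.
responses = pd.read_csv("course_survey_responses.csv")  # assumed file name

# Counts and percentages for a single question.
print(responses["satisfied"].value_counts())
print(responses["satisfied"].value_counts(normalize=True).mul(100).round(1))

# Mean ratings, filtered to one subgroup.
teachers = responses[responses["role"] == "Teacher"]
print(teachers[["content_rating", "lecturer_rating"]].mean())
```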

Finally, you may want to visualise your survey data in generic Business Intelligence platforms like Power BI and Tableau – but be careful. These software tools are not built specifically for survey analysis, and do not always let you structure your data in the right way.


Cynthia Portugal




J Korean Med Sci. 38(48); 2023 Dec 11 (PMC10713437)

Designing, Conducting, and Reporting Survey Studies: A Primer for Researchers

Olena Zimba

1 Department of Clinical Rheumatology and Immunology, University Hospital in Krakow, Krakow, Poland.

2 National Institute of Geriatrics, Rheumatology and Rehabilitation, Warsaw, Poland.

3 Department of Internal Medicine N2, Danylo Halytsky Lviv National Medical University, Lviv, Ukraine.

Armen Yuri Gasparyan

4 Departments of Rheumatology and Research and Development, Dudley Group NHS Foundation Trust (Teaching Trust of the University of Birmingham, UK), Russells Hall Hospital, Dudley, UK.

Survey studies have become instrumental in contributing to the evidence accumulation in rapidly developing medical disciplines such as medical education, public health, and nursing. The global medical community has seen an upsurge of surveys covering the experience and perceptions of health specialists, patients, and public representatives in the peri-pandemic coronavirus disease 2019 period. Currently, surveys can play a central role in increasing research activities in non-mainstream science countries where limited research funding and other barriers hinder science growth. Planning surveys starts with overviewing related reviews and other publications which may help to design questionnaires with comprehensive coverage of all related points. The validity and reliability of questionnaires rely on input from experts and potential responders who may suggest pertinent revisions to prepare forms with attractive designs, easily understandable questions, and correctly ordered points that appeal to target respondents. Currently available numerous online platforms such as Google Forms and Survey Monkey enable moderating online surveys and collecting responses from a large number of responders. Online surveys benefit from disseminating questionnaires via social media and other online platforms which facilitate the survey internationalization and participation of large groups of responders. Survey reporting can be arranged in line with related recommendations and reporting standards all of which have their strengths and limitations. The current article overviews available recommendations and presents pointers on designing, conducting, and reporting surveys.

INTRODUCTION

Surveys are increasingly popular research studies that are aimed at collecting and analyzing opinions of diverse subject groups at certain periods. Initially and predominantly employed for applied social science research, 1 surveys have maintained their social dimension and transformed into indispensable tools for analyzing knowledge, perceptions, prevalence of clinical conditions, and practices in the medical sciences. 2 In rapidly developing disciplines with social dimensions such as medical education, public health, and nursing, online surveys have become essential for monitoring and auditing healthcare and education services 3 , 4 and generating new hypotheses and research questions. 5 In non-mainstream science countries with uninterrupted Internet access, online surveys have also been praised as useful studies for increasing research activities. 6

In 2016, the Medical Subject Headings (MeSH) vocabulary of the US National Library of Medicine introduced "surveys and questionnaires" as a structured keyword, defining survey studies as "collections of data obtained from voluntary subjects" ( https://www.ncbi.nlm.nih.gov/mesh/?term=surveys+and+questionnaires ). Such studies are instrumental in the absence of evidence from randomized controlled trials, systematic reviews, and cohort studies. Tagging survey reports with this MeSH term is advisable for increasing the retrieval of relevant documents while searching through Medline, Scopus, and other global databases.

Surveys are relatively easy to conduct by distributing web-based and non-web-based questionnaires to large groups of potential responders. The ease of conduct primarily depends on the way of approaching potential respondents. Face-to-face interviews, regular postmails, e-mails, phone calls, and social media posts can be employed to reach numerous potential respondents. Digitization and social media popularization have improved the distribution of questionnaires, expanded respondents' engagement, facilitated swift data processing, and globalization of survey studies. 7

SURVEY REPORTING GUIDANCE

Despite the ease of survey studies and their importance for maintaining research activities across academic disciplines, their methodological quality, reproducibility, and implications vary widely. The deficiencies in designing and reporting are the main reason for the inefficiency of some surveys. For instance, systematic analyses of survey methodologies in nephrology, transfusion medicine, and radiology have indicated that less than one-third of related reports provide valid and reliable data. 8 , 9 , 10 Additionally, the absence of discussions of respondents' representativeness, reasons for nonresponse, and generalizability of the results has been pinpointed as a drawback of some survey reports. The revealed deficiencies have justified the need for survey designing and data processing in line with reporting recommendations, including those listed on the EQUATOR Network website ( https://www.equator-network.org/ ).

Arguably, survey studies lack discipline-specific and globally-acceptable reporting guidance. The diversity of surveyed subjects and populations is perhaps the main confounder. Although most questionnaires contain socio-demographic questions, there are no reporting guidelines specifically tailored to comprehensively inquire specialists across different academic disciplines, patients, and public representatives.

The EQUATOR Network platform currently lists some widely promoted documents with statements on conducting and reporting web-based and non-web-based surveys ( Table 1 ). 11 , 12 , 13 , 14 The oldest published recommendation guides on postal, face-to-face, and telephone interviews. 1 One of its critical points highlights the need to formulate a clear and explicit question/objective to run a focused survey and to design questionnaires with respondent-friendly layout and content. 1 The Checklist for Reporting Results of Internet E-Surveys (CHERRIES) is the most-used document for reporting online surveys. 11 The CHERRIES checklist included points on ensuring the reliability of online surveys and avoiding manipulations with multiple entries by the same users. 11 A specific set of recommendations, listed by the EQUATOR Network, is available for specialists who plan web-based and non-web-based surveys of knowledge, attitude, and practice in clinical medicine. 12 These recommendations help design valid questionnaires, survey representative subjects with clinical knowledge, and complete transparent reporting of the obtained results. 12


From January 2018 to December 2019, three rounds of surveying experts with interest in surveys and questionnaires allowed reaching consensus on a set of points for reporting web-based and non-web-based surveys. 13 The Consensus-Based Checklist for Reporting of Survey Studies included a rating of 19 items of survey reports, from titles to acknowledgments. 13 Finally, rapid recommendations on online surveys amid the coronavirus disease 2019 (COVID-19) pandemic were published to guide the authors on how to choose social media and other online platforms for disseminating questionnaires and targeting representative groups of respondents. 14

Adhering to a combination of these recommendations is advisable to minimize the limitations of each document and increase the transparency of survey reports. For cross-sectional analyses of large sample sizes, additionally consulting the STROBE standard of the EQUATOR Network may further improve the accuracy of reporting respondents' inclusion and exclusion criteria. In fact, there are examples of online survey reports adhering to both CHERRIES and STROBE recommendations. 15 , 16

ETHICS CONSIDERATIONS

Although health research authorities in some countries lack mandates for full ethics review of survey studies, obtaining formal review protocols or ethics waivers is advisable for most surveys involving respondents from more than one country. Following country-based regulations and ethical norms of research is therefore mandatory. 14 , 17

Full ethics review or exemption procedures are important steps for planning and conducting ethically sound surveys. Given the non-interventional origin and absence of immediate health risks for participants, ethics committees may approve survey protocols without a full ethics review. 18 A full ethics review is however required when the informational and psychological harms of surveys increase the risk. 18 Informational harms may result from unauthorized access to respondents' personal data and stigmatization of respondents with leaked information about social diseases. Psychological harms may include anxiety, depression, and exacerbation of underlying psychiatric diseases.

Survey questionnaires submitted for evaluation should indicate how informed consent is obtained from respondents. 13 Additionally, information about confidentiality, anonymity, questionnaire delivery modes, compensations, and mechanisms preventing unauthorized access to questionnaires should be provided. 13 , 14 Ethical considerations and validation are especially important in studies involving vulnerable and marginalized subjects with diminished autonomy and poor social status due to dementia, substance abuse, inappropriate sexual behavior, and certain infections. 18 , 19 , 20 Precautions should be taken to avoid confidentiality breaches and bot activities when surveying via insecure online platforms. 21

Monetary compensation helps attract respondents to fill out lengthy questionnaires. However, such incentives may encourage surveyees whose primary interest is the compensation to game the system. 22 Ethics review protocols may include points on recording online responders' IP addresses and blocking duplicate submissions from the same Internet locations. 22 IP addresses are viewed as personal information in the EU, but not in the US. Notably, IP identification may deter some potential responders in the EU. 21

PATIENT KNOWLEDGE AND PERCEPTION SURVEYS

The design of patient knowledge and perception surveys is insufficiently defined and poorly explored. Although such surveys are aimed at consistently covering research questions on clinical presentation, prevention, and treatment, more emphasis is now placed on psychometric aspects of designing related questionnaires. 23 , 24 , 25 Targeting responsive patient groups to collect reliable answers is yet another challenge that can be addressed by distributing questionnaires to patients with good knowledge of their diseases, particularly those registering with university-affiliated clinics and representing patient associations. 26 , 27 , 28

The structure of questionnaires may differ for surveys of patient groups with various age-dependent health issues. Care should be taken when children are targeted since they often report a variety of modifiable conditions such as anxiety and depression, musculoskeletal problems, and pain, affecting their quality of life. 29 Likewise, gender and age differences should be considered in questionnaires addressing the quality of life in association with mental health and social status. 30 Questionnaires for older adults may benefit from including questions about social support and assistance in the context of caring for aging diseases. 31 Finally, addressing the needs of digital technologies and home-care applications may help to ensure the completeness of questionnaires for older adults with sedentary lifestyles and mobility disabilities. 32 , 33

SOCIAL MEDIA FOR QUESTIONNAIRE DISTRIBUTION

The widespread use of social media has made it easier to distribute questionnaires to a large number of potential responders. Employing popular platforms such as Twitter and Facebook has become particularly useful for conducting nationwide surveys on awareness and concerns about global health and pandemic issues. 34 , 35 When various social media platforms are simultaneously employed, participants' sociodemographic factors such as gender, age, and level of education may confound the study results. 36 Knowing targeted groups' preferred online networking and communication sites may better direct the questionnaire distribution. 37 , 38 , 39

Preliminary evidence suggests that distributing survey links via social-media accounts of individual users and organized e-groups with interest in specific health issues may increase their engagement and correctness of responses. 40 , 41

Since surveys employing social media are publicly accessible, related questionnaires should be professionally edited so that they are easy for target populations to answer, avoid sensitive and disturbing points, and ensure privacy and confidentiality. 42 , 43 Although counting e-post views is feasible, response rates of social-media distributed questionnaires are practically impossible to record. The latter is an inherent limitation of such surveys.

SURVEY SAMPLING

Establishing connections with target populations and diversifying questionnaire dissemination may increase the rigor of current surveys which are abundantly administered. 44 Sample sizes depend on various factors, including the chosen topic, aim, and sampling strategy (random or non-random). 12 Some topics such as COVID-19 and global health may easily attract the attention of large respondent groups motivated to answer a variety of questionnaire questions. In the beginning of the pandemic, most surveys employed non-random (non-probability) sampling strategies which resulted in analyses of numerous responses without response rate calculations. These qualitative research studies were mainly aimed to analyze opinions of specialists and patients exposed to COVID-19 to develop rapid guidelines and initiate clinical trials.

Outside the pandemic, and beyond hot topics, there is a growing trend of low response rates and inadequate representation of target populations. 45 Such a trend makes it difficult to design and conduct random (probability) surveys. Subsequently, hypotheses of current online surveys often omit points on randomization and sample size calculation, ending up with qualitative analyses and pilot studies. In fact, convenience (non-random or non-probability) sampling can be particularly suitable for previously unexplored and emerging topics when overviewing literature cannot help estimate optimal samples and entirely new questionnaires should be designed and tested. The limitations of convenience sampling minimize the generalizability of the conclusions since the sample representativeness is uncertain. 45

Researchers often employ 'snowball' sampling techniques with initial surveyees forwarding the questionnaires to other interested respondents, thereby maximizing the sample size. Another common technique for obtaining more responses relies on generating regular social media reminders and resending e-mails to interested individuals and groups. Such tactics can increase the study duration but cannot exclude the participation bias and non-response.

Purposive or targeted sampling is perhaps the most precise technique when the target population size is known and respondents are ready to fill in the questionnaires correctly, allowing an exact estimate of the response rate, close to 100%. 46
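The article does not prescribe a sample size formula, but for readers who want a starting point, the standard calculation for estimating a single proportion is sketched below; the z value, expected proportion, and margin of error are assumptions to adjust for each study.

```python
import math

def sample_size_for_proportion(z: float = 1.96, p: float = 0.5, e: float = 0.05) -> int:
    """Minimum responses needed to estimate a proportion: n = z^2 * p * (1 - p) / e^2.
    Defaults assume a 95% confidence level, maximum variability, and a 5% margin of error."""
    return math.ceil((z ** 2) * p * (1 - p) / (e ** 2))

print(sample_size_for_proportion())         # about 385 completed responses
print(sample_size_for_proportion(e=0.03))   # about 1068 for a tighter 3% margin
```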

DESIGNING QUESTIONNAIRES

Correctness, confidentiality, privacy, and anonymity are critical points of inquiry in questionnaires. 47 Correctly worded and convincingly presented survey invitations with consenting options and reassurances of secure data processing may increase response rates and ensure the validity of responses. 47 Online surveys are believed to be more advantageous than offline inquiries for ensuring anonymity and privacy, particularly for targeting socially marginalized and stigmatized subjects. Online study design is indeed optimal for collecting more responses in surveys of sex- and gender-related and otherwise sensitive topics.

Performing comprehensive literature reviews, consultations with subject experts, and Delphi exercises may all help to specify survey objectives, identify questionnaire domains, and formulate pertinent questions. Literature searches are required for in-depth topic coverage and identification of previously published relevant surveys. By analyzing previous questionnaire characteristics, modifications can be made to designing new self-administered surveys. The justification of new studies should correctly acknowledge similar published reports to avoid redundancies.

The initial part of a questionnaire usually includes a short introduction/preamble/cover letter that specifies the objectives, target respondents, potential benefits and risks, and moderators' contact details for further inquiries. This part may motivate potential respondents to consent and answer questions. The specifics, volume, and format of other parts are dependent on revisions in response to pretesting and pilot testing. 48 The pretesting usually involves co-authors, other contributors, and colleagues with an interest in the subject, while the pilot testing usually involves 5-10 target respondents who are well familiar with the subject and can swiftly complete the questionnaires. The guidance obtained at the pretesting and pilot testing allows editing, shortening, or expanding questionnaire sections. Although guidance on questionnaire length and question numbers is scarce, some experts empirically consider 5 domains with 5 questions in each as optimal. 12 Lengthy questionnaires may be biased due to respondents' fatigue and inability to answer numerous and complicated questions. 46

Questionnaire revisions are aimed at ensuring the validity and consistency of questions, implying the appeal to relevant responders and accurate covering of all essential points. 45 Valid questionnaires enable reliable and reproducible survey studies that end up with the same responses to variably worded and located questions. 45

Various combinations of open-ended and close-ended questions are advisable to comprehensively cover all pertinent points and enable easy and quick completion of questionnaires. Open-ended questions are usually included in small numbers since these require more time to respond. 46 Also, the interpretation and analysis of responses to open-ended questions hardly contribute to generating robust qualitative data. 49 Close-ended questions with single and multiple-choice answers constitute the main part of a questionnaire, with single answers easier to analyze and report. Questions with single answers can be presented as 3 or more Likert scales (e.g., yes/no/do not know).

Avoiding too simplistic (yes/no) questions and replacing them with Likert-scale items may increase the robustness of questionnaire analyses. 50 Additionally, constructing easily understandable questions, excluding merged items with two or more points, and moving sophisticated questions to the beginning of a questionnaire may add to the quality and feasibility of the study. 50

Survey studies are increasingly conducted by health professionals to swiftly explore opinions on a wide range of topics by diverse groups of specialists, patients, and public representatives. Arguably, quality surveys with generalizable results can be instrumental for guiding health practitioners in times of crises such as the COVID-19 pandemic when clinical trials, systematic reviews, and other evidence-based reports are scarcely available or absent. Online surveys can be particularly valuable for collecting and analyzing specialist, patient, and other subjects' responses in non-mainstream science countries where top evidence-based studies are scarce commodities and research funding is limited. Accumulated expertise in drafting quality questionnaires and conducting robust surveys is valuable for producing new data and generating new hypotheses and research questions.

The main advantages of surveys are related to the ease of conducting such studies with limited or no research funding. The digitization and social media advances have further contributed to the ease of surveying and growing global interest toward surveys among health professionals. Some of the disadvantages of current surveys are perhaps those related to imperfections of digital platforms for disseminating questionnaires and analysing responses.

Although some survey reporting standards and recommendations are available, none of these comprehensively cover all items of questionnaires and steps in surveying. None of the survey reporting standards is based on summarizing guidance of a large number of contributors involved in related research projects. As such, presenting the current guidance with a list of items for survey reports ( Table 2 ) may help better design and publish related articles.

Disclosure: The authors have no potential conflicts of interest to disclose.

Author Contributions:

  • Conceptualization: Zimba O.
  • Formal analysis: Zimba O, Gasparyan AY.
  • Writing - original draft: Zimba O.
  • Writing - review & editing: Zimba O, Gasparyan AY.


Chapter 8 Survey Research: A Quantitative Technique

Why Survey Research?

In 2008, the voters of the United States elected our first African American president, Barack Obama. It may not surprise you to learn that when President Obama was coming of age in the 1970s, one-quarter of Americans reported that they would not vote for a qualified African American presidential nominee. Three decades later, when President Obama ran for the presidency, fewer than 8% of Americans still held that position, and President Obama won the election (Smith, 2009). Smith, T. W. (2009). Trends in willingness to vote for a black and woman for president, 1972–2008. GSS Social Change Report No. 55 . Chicago, IL: National Opinion Research Center. We know about these trends in voter opinion because the General Social Survey ( http://www.norc.uchicago.edu/GSS+Website ), a nationally representative survey of American adults, included questions about race and voting over the years described here. Without survey research, we may not know how Americans’ perspectives on race and the presidency shifted over these years.

8.1 Survey Research: What Is It and When Should It Be Used?

Learning Objectives

  • Define survey research.
  • Identify when it is appropriate to employ survey research as a data-collection strategy.

Most of you have probably taken a survey at one time or another, so you probably have a pretty good idea of what a survey is. Sometimes students in my research methods classes feel that understanding what a survey is and how to write one is so obvious, there’s no need to dedicate any class time to learning about it. This feeling is understandable—surveys are very much a part of our everyday lives—we’ve probably all taken one, we hear about their results in the news, and perhaps we’ve even administered one ourselves. What students quickly learn is that there is more to constructing a good survey than meets the eye. Survey design takes a great deal of thoughtful planning and often a great many rounds of revision. But it is worth the effort. As we’ll learn in this chapter, there are many benefits to choosing survey research as one’s method of data collection. We’ll take a look at what a survey is exactly, what some of the benefits and drawbacks of this method are, how to construct a survey, and what to do with survey data once one has it in hand.

Survey research is a quantitative method whereby a researcher poses some set of predetermined questions, typically in a written format, to an entire group, or sample, of individuals. Survey research is an especially useful approach when a researcher aims to describe or explain features of a very large group or groups. This method may also be used as a way of quickly gaining some general details about one’s population of interest to help prepare for a more focused, in-depth study using time-intensive methods such as in-depth interviews or field research. In this case, a survey may help a researcher identify specific individuals or locations from which to collect additional data.

As is true of all methods of data collection, survey research is better suited to answering some kinds of research questions than others. In addition, as you’ll recall from Chapter 6 "Defining and Measuring Concepts", operationalization works differently with different research methods. If your interest is in political activism, for example, you likely operationalize that concept differently in a survey than you would for a field research study of the same topic.

Key Takeaway

  • Survey research is often used by researchers who wish to explain trends or features of large groups. It may also be used to assist those planning some more focused, in-depth study.
  • Recall some of the possible research questions you came up with while reading previous chapters of this text. How might you frame those questions so that they could be answered using survey research?

8.2 Pros and Cons of Survey Research

  • Identify and explain the strengths of survey research.
  • Identify and explain the weaknesses of survey research.

Survey research, as with all methods of data collection, comes with both strengths and weaknesses. We’ll examine both in this section.

Strengths of Survey Method

Researchers employing survey methods to collect data enjoy a number of benefits. First, surveys are an excellent way to gather lots of information from many people. In my own study of older people’s experiences in the workplace, I was able to mail a written questionnaire to around 500 people who lived throughout the state of Maine at a cost of just over $1,000. This cost included printing copies of my seven-page survey, printing a cover letter, addressing and stuffing envelopes, mailing the survey, and buying return postage for the survey. I realize that $1,000 is nothing to sneeze at. But just imagine what it might have cost to visit each of those people individually to interview them in person. Consider the cost of gas to drive around the state, other travel costs, such as meals and lodging while on the road, and the cost of time to drive to and talk with each person individually. We could double, triple, or even quadruple our costs pretty quickly by opting for an in-person method of data collection over a mailed survey. Thus surveys are relatively cost effective .

Related to the benefit of cost effectiveness is a survey’s potential for generalizability . Because surveys allow researchers to collect data from very large samples for a relatively low cost, survey methods lend themselves to probability sampling techniques, which we discussed in Chapter 7 "Sampling" . Of all the data-collection methods described in this text, survey research is probably the best method to use when one hopes to gain a representative picture of the attitudes and characteristics of a large group.

Survey research also tends to be a reliable method of inquiry. This is because surveys are standardized, in that the same questions, phrased in exactly the same way, are posed to participants. Other methods, such as qualitative interviewing, which we’ll learn about in Chapter 9 "Interviews: Qualitative and Quantitative Approaches", do not offer the same consistency that a quantitative survey offers. This is not to say that all surveys are always reliable. A poorly phrased question can cause respondents to interpret its meaning differently, which can reduce that question’s reliability. Assuming well-constructed questions and questionnaire design, one strength of survey methodology is its potential to produce reliable results.

The versatility of survey research is also an asset. Surveys are used by all kinds of people in all kinds of professions. I repeat, surveys are used by all kinds of people in all kinds of professions. Is there a light bulb switching on in your head? I hope so. The versatility offered by survey research means that understanding how to construct and administer surveys is a useful skill to have for all kinds of jobs. Lawyers might use surveys in their efforts to select juries, social service and other organizations (e.g., churches, clubs, fundraising groups, activist groups) use them to evaluate the effectiveness of their efforts, businesses use them to learn how to market their products, governments use them to understand community opinions and needs, and politicians and media outlets use surveys to understand their constituencies.

In sum, the following are benefits of survey research:

  • Cost-effective
  • Generalizable
  • Reliable
  • Versatile

Weaknesses of Survey Method

As with all methods of data collection, survey research also comes with a few drawbacks. First, while one might argue that surveys are flexible in the sense that we can ask any number of questions on any number of topics in them, the fact that the survey researcher is generally stuck with a single instrument for collecting data (the questionnaire) makes surveys in many ways rather inflexible. Let’s say you mail a survey out to 1,000 people and then discover, as responses start coming in, that your phrasing on a particular question seems to be confusing a number of respondents. At this stage, it’s too late for a do-over or to change the question for the respondents who haven’t yet returned their surveys. When conducting in-depth interviews, on the other hand, a researcher can provide respondents further explanation if they’re confused by a question and can tweak their questions as they learn more about how respondents seem to understand them.

Validity can also be a problem with surveys. Survey questions are standardized; thus it can be difficult to ask anything other than very general questions that a broad range of people will understand. Because of this, survey results may not be as valid as results obtained using methods of data collection that allow a researcher to more comprehensively examine whatever topic is being studied. Let’s say, for example, that you want to learn something about voters’ willingness to elect an African American president, as in our opening example in this chapter. General Social Survey respondents were asked, “If your party nominated an African American for president, would you vote for him if he were qualified for the job?” Respondents were then asked to respond either yes or no to the question. But what if someone’s opinion was more complex than could be answered with a simple yes or no? What if, for example, a person was willing to vote for an African American woman but not an African American man? I am not at all suggesting that such a perspective makes any sense, but it is conceivable that an individual might hold such a perspective.

In sum, potential drawbacks to survey research include the following:

  • Inflexibility
  • Issues with validity

Key Takeaways

  • Strengths of survey research include its cost effectiveness, generalizability, reliability, and versatility.
  • Weaknesses of survey research include inflexibility and issues with validity.
  • What are some ways that survey researchers might overcome the weaknesses of this method?
  • Find an article reporting results from survey research (remember how to use Sociological Abstracts?). How do the authors describe the strengths and weaknesses of their study? Are any of the strengths or weaknesses described here mentioned in the article?

8.3 Types of Surveys

  • Define cross-sectional surveys, provide an example of a cross-sectional survey, and outline some of the drawbacks of cross-sectional research.
  • Describe the various types of longitudinal surveys.
  • Define retrospective surveys, and identify their strengths and weaknesses.
  • Discuss some of the benefits and drawbacks of the various methods of delivering self-administered questionnaires.

There is much variety when it comes to surveys. This variety comes both in terms of time—when or with what frequency a survey is administered—and in terms of administration—how a survey is delivered to respondents. In this section we’ll take a look at what types of surveys exist when it comes to both time and administration.

In terms of time, there are two main types of surveys: cross-sectional and longitudinal. Cross-sectional surveys are those that are administered at just one point in time. These surveys offer researchers a sort of snapshot in time and give us an idea about how things are for our respondents at the particular point in time that the survey is administered. My own study of older workers mentioned previously is an example of a cross-sectional survey. I administered the survey at just one time.

Another example of a cross-sectional survey comes from Aniko Kezdy and colleagues’ study (Kezdy, Martos, Boland, & Horvath-Szabo, 2011) Kezdy, A., Martos, T., Boland, V., & Horvath-Szabo, K. (2011). Religious doubts and mental health in adolescence and young adulthood: The association with religious attitudes. Journal of Adolescence, 34 , 39–47. of the association between religious attitudes, religious beliefs, and mental health among students in Hungary. These researchers administered a single, one-time-only, cross-sectional survey to a convenience sample of 403 high school and college students. The survey focused on how religious attitudes impact various aspects of one’s life and health. The researchers found from analysis of their cross-sectional data that anxiety and depression were highest among those who had both strong religious beliefs and also some doubts about religion. Yet another recent example of cross-sectional survey research can be seen in Bateman and colleagues’ study (Bateman, Pike, & Butler, 2011) of how the perceived publicness of social networking sites influences users’ self-disclosures. Bateman, P. J., Pike, J. C., & Butler, B. S. (2011). To disclose or not: Publicness in social networking sites. Information Technology & People, 24 , 78–100. These researchers administered an online survey to undergraduate and graduate business students. They found that even though revealing information about oneself is viewed as key to realizing many of the benefits of social networking sites, respondents were less willing to disclose information about themselves as their perceptions of a social networking site’s publicness rose. That is, there was a negative relationship between perceived publicness of a social networking site and plans to self-disclose on the site.

One problem with cross-sectional surveys is that the events, opinions, behaviors, and other phenomena that such surveys are designed to assess don’t generally remain stagnant. Thus generalizing from a cross-sectional survey about the way things are can be tricky; perhaps you can say something about the way things were in the moment that you administered your survey, but it is difficult to know whether things remained that way for long after you administered your survey. Think, for example, about how Americans might have responded if administered a survey asking for their opinions on terrorism on September 10, 2001. Now imagine how responses to the same set of questions might differ were they administered on September 12, 2001. The point is not that cross-sectional surveys are useless; they have many important uses. But researchers must remember what they have captured by administering a cross-sectional survey; that is, as previously noted, a snapshot of life as it was at the time that the survey was administered.

One way to overcome this sometimes problematic aspect of cross-sectional surveys is to administer a longitudinal survey. Longitudinal surveys are those that enable a researcher to make observations over some extended period of time. There are several types of longitudinal surveys, including trend, panel, and cohort surveys. We’ll discuss all three types here, along with another type of survey called retrospective. Retrospective surveys fall somewhere in between cross-sectional and longitudinal surveys.

The first type of longitudinal survey is called a trend survey. The main focus of a trend survey is, perhaps not surprisingly, trends. Researchers conducting trend surveys are interested in how people’s inclinations change over time. The Gallup opinion polls are an excellent example of trend surveys. You can read more about Gallup on their website: http://www.gallup.com/Home.aspx . To learn about how public opinion changes over time, Gallup administers the same questions to people at different points in time. For example, for several years Gallup has polled Americans to find out what they think about gas prices (something many of us happen to have opinions about). One thing we’ve learned from Gallup’s polling is that price increases in gasoline caused financial hardship for 67% of respondents in 2011, up from 40% in the year 2000. Gallup’s findings about trends in opinions about gas prices have also taught us that whereas just 34% of people in early 2000 thought the current rise in gas prices was permanent, 54% of people in 2011 believed the rise to be permanent. Thus through Gallup’s use of trend survey methodology, we’ve learned that Americans seem to feel generally less optimistic about the price of gas these days than they did 10 or so years ago. You can read about these and other findings on Gallup’s gasoline questions at http://www.gallup.com/poll/147632/Gas-Prices.aspx#1 . It should be noted that in a trend survey, the same people are probably not answering the researcher’s questions each year. Because the interest here is in trends, not specific people, as long as the researcher’s sample is representative of whatever population he or she wishes to describe trends for, it isn’t important that the same people participate each time.

Next are panel surveys. Unlike in a trend survey, in a panel survey the same people do participate in the survey each time it is administered. As you might imagine, panel studies can be difficult and costly. Imagine trying to administer a survey to the same 100 people every year for, say, 5 years in a row. Keeping track of where people live, when they move, and when they die takes resources that researchers often don’t have. When they do, however, the results can be quite powerful. The Youth Development Study (YDS), administered from the University of Minnesota, offers an excellent example of a panel study. You can read more about the Youth Development Study at its website: http://www.soc.umn.edu/research/yds . Since 1988, YDS researchers have administered an annual survey to the same 1,000 people. Study participants were in ninth grade when the study began, and they are now in their thirties. Several hundred papers, articles, and books have been written using data from the YDS. One of the major lessons learned from this panel study is that work has a largely positive impact on young people (Mortimer, 2003). Mortimer, J. T. (2003). Working and growing up in America . Cambridge, MA: Harvard University Press. Contrary to popular beliefs about the impact of work on adolescents’ performance in school and transition to adulthood, work in fact increases confidence, enhances academic success, and prepares students for success in their future careers. Without this panel study, we may not be aware of the positive impact that working can have on young people.

Another type of longitudinal survey is a cohort survey. In a cohort survey, a researcher identifies some category of people that are of interest and then regularly surveys people who fall into that category. The same people don’t necessarily participate from year to year, but all participants must meet whatever categorical criteria fulfill the researcher’s primary interest. Common cohorts that may be of interest to researchers include people of particular generations or those who were born around the same time period, graduating classes, people who began work in a given industry at the same time, or perhaps people who have some specific life experience in common. An example of this sort of research can be seen in Christine Percheski’s work (2008) Percheski, C. (2008). Opting out? Cohort differences in professional women’s employment rates from 1960 to 2005. American Sociological Review, 73 , 497–517. on cohort differences in women’s employment. Percheski compared women’s employment rates across seven different generational cohorts, from Progressives born between 1906 and 1915 to Generation Xers born between 1966 and 1975. She found, among other patterns, that professional women’s labor force participation had increased across all cohorts. She also found that professional women with young children from Generation X had higher labor force participation rates than similar women from previous generations, concluding that mothers do not appear to be opting out of the workforce as some journalists have speculated (Belkin, 2003). Belkin, L. (2003, October 26). The opt-out revolution. New York Times , pp. 42–47, 58, 85–86.

All three types of longitudinal surveys share the strength that they permit a researcher to make observations over time. This means that if whatever behavior or other phenomenon the researcher is interested in changes, either because of some world event or because people age, the researcher will be able to capture those changes. Table 8.1 "Types of Longitudinal Surveys" summarizes each of the three types of longitudinal surveys.

Table 8.1 Types of Longitudinal Surveys

Finally, retrospective surveys are similar to other longitudinal studies in that they deal with changes over time, but like a cross-sectional study, they are administered only once. In a retrospective survey, participants are asked to report events from the past. By having respondents report past behaviors, beliefs, or experiences, researchers are able to gather longitudinal-like data without actually incurring the time or expense of a longitudinal survey. Of course, this benefit must be weighed against the possibility that people’s recollections of their pasts may be faulty. Imagine, for example, that you’re asked in a survey to respond to questions about where, how, and with whom you spent last Valentine’s Day. As last Valentine’s Day can’t have been more than 12 months ago, chances are good that you might be able to respond accurately to any survey questions about it. But now let’s say the researcher wants to know how last Valentine’s Day compares to previous Valentine’s Days, so he asks you to report on where, how, and with whom you spent the preceding six Valentine’s Days. How likely is it that you will remember? Will your responses be as accurate as they might have been had you been asked the question each year over the past 6 years rather than asked to report on all years today?

In sum, when or with what frequency a survey is administered will determine whether your survey is cross-sectional or longitudinal. While longitudinal surveys are certainly preferable in terms of their ability to track changes over time, the time and cost required to administer a longitudinal survey can be prohibitive. As you may have guessed, the issues of time described here are not necessarily unique to survey research. Other methods of data collection can be cross-sectional or longitudinal—these are really matters of research design. But we’ve placed our discussion of these terms here because they are most commonly used by survey researchers to describe the type of survey administered. Another aspect of survey administration deals with how surveys are administered. We’ll examine that next.

Administration

Surveys vary not just in terms of when they are administered but also in terms of how they are administered. One common way to administer surveys is in the form of self-administered questionnaires. This means that a research participant is given a set of questions, in writing, to which he or she is asked to respond. Self-administered questionnaires can be delivered in hard copy format, typically via mail, or increasingly more commonly, online. We’ll consider both modes of delivery here.

Hard copy self-administered questionnaires may be delivered to participants in person or via snail mail. Perhaps you’ve taken a survey that was given to you in person; on many college campuses it is not uncommon for researchers to administer surveys in large social science classes (as you might recall from the discussion in our chapter on sampling). In my own introduction to sociology courses, I’ve welcomed graduate students and professors doing research in areas that are relevant to my students, such as studies of campus life, to administer their surveys to the class. If you are ever asked to complete a survey in a similar setting, it might be interesting to note how your perspective on the survey and its questions could be shaped by the new knowledge you’re gaining about survey research in this chapter.

Researchers may also deliver surveys in person by going door-to-door and either asking people to fill them out right away or making arrangements for the researcher to return to pick up completed surveys. Though the advent of online survey tools has made door-to-door delivery of surveys less common, I still see an occasional survey researcher at my door, especially around election time. This mode of gathering data is apparently still used by political campaign workers, at least in some areas of the country.

If you are not able to visit each member of your sample personally to deliver a survey, you might consider sending your survey through the mail. While this mode of delivery may not be ideal (imagine how much less likely you’d probably be to return a survey that didn’t come with the researcher standing on your doorstep waiting to take it from you), sometimes it is the only available or the most practical option. As I’ve said, this may not be the most ideal way of administering a survey because it can be difficult to convince people to take the time to complete and return your survey.

Often survey researchers who deliver their surveys via snail mail may provide some advance notice to respondents about the survey to get people thinking about and preparing to complete it. They may also follow up with their sample a few weeks after their survey has been sent out. This can be done not only to remind those who have not yet completed the survey to please do so but also to thank those who have already returned the survey. Most survey researchers agree that this sort of follow-up is essential for improving mailed surveys’ return rates (Babbie, 2010). Babbie, E. (2010). The practice of social research (12th ed.). Belmont, CA: Wadsworth.

In my own study of older workers’ harassment experiences, people in the sample were notified in advance of the survey mailing via an article describing the research in a newsletter they received from the agency with whom I had partnered to conduct the survey. When I mailed the survey, a $1 bill was included with each in order to provide some incentive and an advance token of thanks to participants for returning the surveys. Two months after the initial mailing went out, those who were sent a survey were contacted by phone. While returned surveys did not contain any identifying information about respondents, my research assistants contacted individuals to whom a survey had been mailed to remind them that it was not too late to return their survey and to say thanks to those who may have already done so. Four months after the initial mailing went out, everyone on the original mailing list received a letter thanking those who had returned the survey and once again reminding those who had not that it was not too late to do so. The letter included a return postcard for respondents to complete should they wish to receive another copy of the survey. Respondents were also provided a telephone number to call and were provided the option of completing the survey by phone. As you can see, administering a survey by mail typically involves much more than simply arranging a single mailing; participants may be notified in advance of the mailing, they then receive the mailing, and then several follow-up contacts will likely be made after the survey has been mailed.

Earlier I mentioned online delivery as another way to administer a survey. This delivery mechanism is becoming increasingly common, no doubt because it is easy to use, relatively cheap, and may be quicker than knocking on doors or waiting for mailed surveys to be returned. To deliver a survey online, a researcher may subscribe to a service that offers online delivery or use some delivery mechanism that is available for free. SurveyMonkey offers both free and paid online survey services ( http://www.surveymonkey.com ). One advantage to using a service like SurveyMonkey, aside from the advantages of online delivery already mentioned, is that results can be provided to you in formats that are readable by data analysis programs such as SPSS, Systat, and Excel. This saves you, the researcher, the step of having to manually enter data into your analysis program, as you would if you administered your survey in hard copy format.

Many of the suggestions provided for improving the response rate on a hard copy questionnaire apply to online questionnaires as well. One difference of course is that the sort of incentives one can provide in an online format differ from those that can be given in person or sent through the mail. But this doesn’t mean that online survey researchers cannot offer completion incentives to their respondents. I’ve taken a number of online surveys; many of these did not come with an incentive other than the joy of knowing that I’d helped a fellow social scientist do his or her job, but on one I was given a printable $5 coupon to my university’s campus dining services on completion, and another time I was given a coupon code to use for $10 off any order on Amazon.com. I’ve taken other online surveys where on completion I could provide my name and contact information if I wished to be entered into a drawing together with other study participants to win a larger gift, such as a $50 gift card or an iPad.

Sometimes surveys are administered by having a researcher actually pose questions directly to respondents rather than having respondents read the questions on their own. These types of surveys are a form of interviews. We discuss interviews in Chapter 9 "Interviews: Qualitative and Quantitative Approaches" , where we’ll examine interviews of the survey (or quantitative) type and qualitative interviews as well. Interview methodology differs from survey research in that data are collected via a personal interaction. Because asking people questions in person comes with a set of guidelines and concerns that differ from those associated with asking questions on paper or online, we’ll reserve our discussion of those guidelines and concerns for Chapter 9 "Interviews: Qualitative and Quantitative Approaches" .

Whatever delivery mechanism you choose, keep in mind that there are pros and cons to each of the options described here. While online surveys may be faster and cheaper than mailed surveys, can you be certain that every person in your sample will have the necessary computer hardware, software, and Internet access in order to complete your online survey? On the other hand, perhaps mailed surveys are more likely to reach your entire sample but also more likely to be lost and not returned. The choice of which delivery mechanism is best depends on a number of factors including your resources, the resources of your study participants, and the time you have available to distribute surveys and wait for responses. In my own survey of older workers, I would have much preferred to administer my survey online, but because so few people in my sample were likely to have computers, and even fewer would have Internet access, I chose instead to mail paper copies of the survey to respondents’ homes. Understanding the characteristics of your study’s population is key to identifying the appropriate mechanism for delivering your survey.

  • Time is a factor in determining what type of survey a researcher administers; cross-sectional surveys are administered at one time, and longitudinal surveys are administered over time.
  • Retrospective surveys offer some of the benefits of longitudinal research but also come with their own drawbacks.
  • Self-administered questionnaires may be delivered in hard copy form to participants in person or via snail mail or online.
  • If the idea of a panel study piqued your interest, check out the Up series of documentary films. While not a survey, the films offer one example of a panel study. Filmmakers began filming the lives of 14 British children in 1964, when the children were 7 years old. They have since caught up with the children every 7 years. In 2012, the eighth installment of the documentary, 56 Up , will come out. Many clips from the series are available on YouTube.
  • For more information about online delivery of surveys, check out SurveyMonkey’s website: http://www.surveymonkey.com .

8.4 Designing Effective Questions and Questionnaires

  • Identify the steps one should take in order to write effective survey questions.
  • Describe some of the ways that survey questions might confuse respondents and how to overcome that possibility.
  • Recite the two response option guidelines when writing closed-ended questions.
  • Define fence-sitting and floating.
  • Describe the steps involved in constructing a well-designed questionnaire.
  • Discuss why pretesting is important.

To this point we’ve considered several general points about surveys including when to use them, some of their pros and cons, and how often and in what ways to administer surveys. In this section we’ll get more specific and take a look at how to pose understandable questions that will yield useable data and how to present those questions on your questionnaire.

Asking Effective Questions

The first thing you need to do in order to write effective survey questions is identify what exactly it is that you wish to know. As silly as it sounds to state what seems so completely obvious, I can’t stress enough how easy it is to forget to include important questions when designing a survey. Let’s say you want to understand how students at your school made the transition from high school to college. Perhaps you wish to identify which students were comparatively more or less successful in this transition and which factors contributed to students’ success or lack thereof. To understand which factors shaped successful students’ transitions to college, you’ll need to include questions in your survey about all the possible factors that could contribute. Consulting the literature on the topic will certainly help, but you should also take the time to do some brainstorming on your own and to talk with others about what they think may be important in the transition to college. Perhaps time or space limitations won’t allow you to include every single item you’ve come up with, so you’ll also need to think about ranking your questions so that you can be sure to include those that you view as most important.

Although I have stressed the importance of including questions on all topics you view as important to your overall research question, you don’t want to take an everything-but-the-kitchen-sink approach by uncritically including every possible question that occurs to you. Doing so puts an unnecessary burden on your survey respondents. Remember that you have asked your respondents to give you their time and attention and to take care in responding to your questions; show them your respect by only asking questions that you view as important.

Once you’ve identified all the topics about which you’d like to ask questions, you’ll need to actually write those questions. Questions should be as clear and to the point as possible. This is not the time to show off your creative writing skills; a survey is a technical instrument and should be written in a way that is as direct and succinct as possible. As I’ve said, your survey respondents have agreed to give their time and attention to your survey. The best way to show your appreciation for their time is to not waste it. Ensuring that your questions are clear and not overly wordy will go a long way toward showing your respondents the gratitude they deserve.

Related to the point about not wasting respondents’ time, make sure that every question you pose will be relevant to every person you ask to complete it. This means two things: first, that respondents have knowledge about whatever topic you are asking them about, and second, that respondents have experience with whatever events, behaviors, or feelings you are asking them to report. You probably wouldn’t want to ask a sample of 18-year-old respondents, for example, how they would have advised President Reagan to proceed when news of the United States’ sale of weapons to Iran broke in the mid-1980s. For one thing, few 18-year-olds are likely to have any clue about how to advise a president (nor does this 30-something-year-old). Furthermore, the 18-year-olds of today were not even alive during Reagan’s presidency, so they have had no experience with the event about which they are being questioned. In our example of the transition to college, heeding the criterion of relevance would mean that respondents must understand what exactly you mean by “transition to college” if you are going to use that phrase in your survey and that respondents must have actually experienced the transition to college themselves.

If you decide that you do wish to pose some questions about matters with which only a portion of respondents will have had experience, it may be appropriate to introduce a filter question into your survey. A filter question is designed to identify some subset of survey respondents who are asked additional questions that are not relevant to the entire sample. Perhaps in your survey on the transition to college you want to know whether substance use plays any role in students’ transitions. You may ask students how often they drank during their first semester of college. But this assumes that all students drank. Certainly some may have abstained, and it wouldn’t make any sense to ask the nondrinkers how often they drank. Nevertheless, it seems reasonable that drinking frequency may have an impact on someone’s transition to college, so it is probably worth asking this question even if doing so violates the rule of relevance for some respondents. This is just the sort of instance when a filter question would be appropriate. You may pose the question as it is presented in Figure 8.8 "Filter Question" .

Figure 8.8 Filter Question


There are some ways of asking questions that are bound to confuse a good many survey respondents. Survey researchers should take great care to avoid these kinds of questions. These include questions that pose double negatives, those that use confusing or culturally specific terms, and those that ask more than one question but are posed as a single question. Any time respondents are forced to decipher questions that utilize two forms of negation, confusion is bound to ensue. Taking the previous question about drinking as our example, what if we had instead asked, “Did you not drink during your first semester of college?” A response of no would mean that the respondent did actually drink—he or she did not not drink. This example is obvious, but hopefully it drives home the point to be careful about question wording so that respondents are not asked to decipher double negatives. In general, avoiding negative terms in your question wording will help to increase respondent understanding. Though this is generally true, some researchers argue that negatively worded questions should be integrated with positively worded questions in order to ensure that respondents have actually carefully read each question. See, for example, the following: Vaterlaus, M., & Higgenbotham, B. (2011). Writing survey questions for local program evaluations. Retrieved from http://extension.usu.edu/files/publications/publication/FC_Evaluation_2011-02pr.pdf

You should also avoid using terms or phrases that may be regionally or culturally specific (unless you are absolutely certain all your respondents come from the region or culture whose terms you are using). When I first moved to Maine from Minnesota, I was totally confused every time I heard someone use the word wicked . This term has totally different meanings across different regions of the country. I’d come from an area that understood the term wicked to be associated with evil. In my new home, however, wicked is used simply to put emphasis on whatever it is that you’re talking about. So if this chapter is extremely interesting to you, if you live in Maine you might say that it is “wicked interesting.” If you hate this chapter and you live in Minnesota, perhaps you’d describe the chapter simply as wicked. I once overheard one student tell another that his new girlfriend was “wicked athletic.” At the time I thought this meant he’d found a woman who used her athleticism for evil purposes. I’ve come to understand, however, that this woman is probably just exceptionally athletic. While wicked may not be a term you’re likely to use in a survey, the point is to be thoughtful and cautious about whatever terminology you do use.

Asking multiple questions as though they are a single question can also be terribly confusing for survey respondents. There’s a specific term for this sort of question; it is called a double-barreled question: a question that is posed as a single question but in fact asks more than one question. Using our example of the transition to college, Figure 8.9 "Double-Barreled Question" shows a double-barreled question.

Figure 8.9 Double-Barreled Question


Do you see what makes the question double-barreled? How would someone respond if they felt their college classes were more demanding but also more boring than their high school classes? Or less demanding but more interesting? Because the question combines “demanding” and “interesting,” there is no way to respond yes to one criterion but no to the other.

Another thing to avoid when constructing survey questions is the problem of social desirability. We all want to look good, right? And we all probably know the politically correct response to a variety of questions whether we agree with the politically correct response or not. In survey research, social desirability refers to the idea that respondents will try to answer questions in a way that will present them in a favorable light. Perhaps we decide that to understand the transition to college, we need to know whether respondents ever cheated on an exam in high school or college. We all know that cheating on exams is generally frowned upon (at least I hope we all know this). So it may be difficult to get people to admit to cheating on a survey. But if you can guarantee respondents’ confidentiality, or even better, their anonymity, chances are much better that they will be honest about having engaged in this socially undesirable behavior. Another way to avoid problems of social desirability is to try to phrase difficult questions in the most benign way possible. Earl Babbie (2010) Babbie, E. (2010). The practice of social research (12th ed.). Belmont, CA: Wadsworth. offers a useful suggestion for helping you do this: simply imagine how you would feel responding to your survey questions. If you would be uncomfortable, chances are others would as well.

Finally, it is important to get feedback on your survey questions from as many people as possible, especially people who are like those in your sample. Now is not the time to be shy. Ask your friends for help, ask your mentors for feedback, ask your family to take a look at your survey as well. The more feedback you can get on your survey questions, the better the chances that you will come up with a set of questions that are understandable to a wide variety of people and, most importantly, to those in your sample.

In sum, in order to pose effective survey questions, researchers should do the following:

  • Identify what it is they wish to know.
  • Keep questions clear and succinct.
  • Make questions relevant to respondents.
  • Use filter questions when necessary.
  • Avoid questions that are likely to confuse respondents such as those that use double negatives, use culturally specific terms, or pose more than one question in the form of a single question.
  • Imagine how they would feel responding to questions.
  • Get feedback, especially from people who resemble those in the researcher’s sample.

Response Options

While posing clear and understandable questions in your survey is certainly important, so, too, is providing respondents with unambiguous response options. Response options are the answers that you provide to the people taking your survey. Generally respondents will be asked to choose a single (or best) response to each question you pose, though certainly it makes sense in some cases to instruct respondents to choose multiple response options. One caution to keep in mind when accepting multiple responses to a single question, however, is that doing so may add complexity when it comes to tallying and analyzing your survey results.

Offering response options assumes that your questions will be closed-ended questions. In a quantitative written survey, which is the type of survey we’ve been discussing here, chances are good that most if not all your questions will be closed ended. This means that you, the researcher, will provide respondents with a limited set of options for their responses. To write an effective closed-ended question, there are a couple of guidelines worth following. First, be sure that your response options are mutually exclusive . Look back at Figure 8.8 "Filter Question" , which contains questions about how often and how many drinks respondents consumed. Do you notice that there are no overlapping categories in the response options for these questions? This is another one of those points about question construction that seems fairly obvious but that can be easily overlooked. Response options should also be exhaustive . In other words, every possible response should be covered in the set of response options that you provide. For example, note that in question 10a in Figure 8.8 "Filter Question" we have covered all possibilities: those who drank, say, an average of once per month can choose the first response option (“less than one time per week”) while those who drank multiple times a day each day of the week can choose the last response option (“7+”). All the possibilities in between these two extremes are covered by the middle three response options.

Surveys need not be limited to closed-ended questions. Sometimes survey researchers include open-ended questions in their survey instruments as a way to gather additional details from respondents. An open-ended question does not include response options; instead, respondents are asked to reply to the question in their own way, using their own words. These questions are generally used to find out more about a survey participant’s experiences or feelings about whatever they are being asked to report in the survey. If, for example, a survey includes closed-ended questions asking respondents to report on their involvement in extracurricular activities during college, an open-ended question could ask respondents why they participated in those activities or what they gained from their participation. While responses to such questions may also be captured using a closed-ended format, allowing participants to share some of their responses in their own words can make the experience of completing the survey more satisfying to respondents and can also reveal new motivations or explanations that had not occurred to the researcher.

In Section 8.4.1 "Asking Effective Questions" we discussed double-barreled questions, but response options can also be double barreled, and this should be avoided. Figure 8.10 "Double-Barreled Response Options" is an example of a question that uses double-barreled response options.

Figure 8.10 Double-Barreled Response Options


Other things to avoid when it comes to response options include fence-sitting and floating. Fence-sitters are respondents who choose neutral response options, even if they have an opinion. This can occur if respondents are given, say, five rank-ordered response options, such as strongly agree, agree, no opinion, disagree, and strongly disagree. Some people will be drawn to respond “no opinion” even if they have an opinion, particularly if their true opinion is the nonsocially desirable opinion. Floaters, on the other hand, are respondents who choose a substantive answer to a question when really they don’t understand the question or don’t have an opinion. If a respondent is only given four rank-ordered response options, such as strongly agree, agree, disagree, and strongly disagree, those who have no opinion have no choice but to select a response that suggests they have an opinion.

As you can see, floating is the flip side of fence-sitting. Thus the solution to one problem is often the cause of the other. How you decide which approach to take depends on the goals of your research. Sometimes researchers actually want to learn something about people who claim to have no opinion. In this case, allowing for fence-sitting would be necessary. Other times researchers feel confident their respondents will all be familiar with every topic in their survey. In this case, perhaps it is OK to force respondents to choose an opinion. There is no always-correct solution to either problem.

Finally, using a matrix is a nice way of streamlining response options. A matrix is a question type that lists a set of questions for which the answer categories are all the same. If you have a set of questions for which the response options are the same, it may make sense to create a matrix rather than posing each question and its response options individually. Not only will this save you some space in your survey but it will also help respondents progress through your survey more easily. A sample matrix can be seen in Figure 8.11 "Survey Questions Utilizing Matrix Format" .

Figure 8.11 Survey Questions Utilizing Matrix Format


Designing Questionnaires

In addition to constructing quality questions and posing clear response options, you’ll also need to think about how to present your written questions and response options to survey respondents. Questions are presented on a questionnaire, the document (either hard copy or online) that contains all your survey questions that respondents read and mark their responses on. Designing questionnaires takes some thought, and in this section we’ll discuss the sorts of things you should think about as you prepare to present your well-constructed survey questions on a questionnaire.

One of the first things to do once you’ve come up with a set of survey questions you feel confident about is to group those questions thematically. In our example of the transition to college, perhaps we’d have a few questions asking about study habits, others focused on friendships, and still others on exercise and eating habits. Those may be the themes around which we organize our questions. Or perhaps it would make more sense to present any questions we had about precollege life and habits and then present a series of questions about life after beginning college. The point here is to be deliberate about how you present your questions to respondents.

Once you have grouped similar questions together, you’ll need to think about the order in which to present those question groups. Most survey researchers agree that it is best to begin a survey with questions that will make respondents want to continue (Babbie, 2010; Dillman, 2000; Neuman, 2003). Babbie, E. (2010). The practice of social research (12th ed.). Belmont, CA: Wadsworth; Dillman, D. A. (2000). Mail and Internet surveys: The tailored design method (2nd ed.). New York, NY: Wiley; Neuman, W. L. (2003). Social research methods: Qualitative and quantitative approaches (5th ed.). Boston, MA: Pearson. In other words, don’t bore respondents, but don’t scare them away either. There’s some disagreement over where on a survey to place demographic questions such as those about a person’s age, gender, and race. On the one hand, placing them at the beginning of the questionnaire may lead respondents to think the survey is boring, unimportant, and not something they want to bother completing. On the other hand, if your survey deals with some very sensitive or difficult topic, such as child sexual abuse or other criminal activity, you don’t want to scare respondents away or shock them by beginning with your most intrusive questions.

In truth, the order in which you present questions on a survey is best determined by the unique characteristics of your research—only you, the researcher, hopefully in consultation with people who are willing to provide you with feedback, can determine how best to order your questions. To do so, think about the unique characteristics of your topic, your questions, and most importantly, your sample. Keeping in mind the characteristics and needs of the people you will ask to complete your survey should help guide you as you determine the most appropriate order in which to present your questions.

You’ll also need to consider the time it will take respondents to complete your questionnaire. Surveys vary in length, from just a page or two to a dozen or more pages, which means they also vary in the time it takes to complete them. How long to make your survey depends on several factors. First, what is it that you wish to know? Wanting to understand how grades vary by gender and year in school certainly requires fewer questions than wanting to know how people’s experiences in college are shaped by demographic characteristics, college attended, housing situation, family background, college major, friendship networks, and extracurricular activities. Keep in mind that even if your research question requires a good number of questions be included in your questionnaire, do your best to keep the questionnaire as brief as possible. Any hint that you’ve thrown in a bunch of useless questions just for the sake of throwing them in will turn off respondents and may make them not want to complete your survey.

Second, and perhaps more important, how long are respondents likely to be willing to spend completing your questionnaire? If you are studying college students, asking them to use their precious fun time away from studying to complete your survey may mean they won’t want to spend more than a few minutes on it. But if you have the endorsement of a professor who is willing to allow you to administer your survey in class, students may be willing to give you a little more time (though perhaps the professor will not). The time that survey researchers ask respondents to spend on questionnaires varies greatly. Some advise that surveys should not take longer than about 15 minutes to complete (see http://www.worldopinion.com/the_frame/frame4.html , cited in Babbie, E. (2010). The practice of social research (12th ed.). Belmont, CA: Wadsworth), while others suggest that up to 20 minutes is acceptable (Hopper, 2010). Hopper, J. (2010). How long should a survey be? Retrieved from http://www.verstaresearch.com/blog/how-long-should-a-survey-be As with question order, there is no clear-cut, always-correct answer about questionnaire length. The unique characteristics of your study and your sample should be considered in order to determine how long to make your questionnaire.

A good way to estimate the time it will take respondents to complete your questionnaire is through pretesting. Pretesting allows you to get feedback on your questionnaire so you can improve it before you actually administer it. Pretesting can be quite expensive and time consuming if you wish to test your questionnaire on a large sample of people who very much resemble the sample to whom you will eventually administer the finalized version of your questionnaire. But you can learn a lot and make great improvements to your questionnaire simply by pretesting with a small number of people to whom you have easy access (perhaps you have a few friends who owe you a favor). By pretesting your questionnaire you can find out how understandable your questions are, get feedback on question wording and order, find out whether any of your questions are exceptionally boring or offensive, and learn whether there are places where you should have included filter questions, to name just a few of the benefits of pretesting. You can also time pretesters as they take your survey. Ask them to complete the survey as though they were actually members of your sample. This will give you a good idea about what sort of time estimate to provide respondents when it comes time to actually administer your survey, and about whether you have some wiggle room to add additional items or need to cut a few items.

Perhaps this goes without saying, but your questionnaire should also be attractive. A messy presentation style can confuse respondents or, at the very least, annoy them. Be brief, to the point, and as clear as possible. Avoid cramming too much into a single page, make your font size readable (at least 12 point), leave a reasonable amount of space between items, and make sure all instructions are exceptionally clear. Think about books, documents, articles, or web pages that you have read yourself—which were relatively easy to read and easy on the eyes and why? Try to mimic those features in the presentation of your survey questions.

  • Brainstorming and consulting the literature are two important early steps to take when preparing to write effective survey questions.
  • Make sure that your survey questions will be relevant to all respondents and that you use filter questions when necessary.
  • Getting feedback on your survey questions is a crucial step in the process of designing a survey.
  • When it comes to creating response options, the solution to the problem of fence-sitting might cause floating, whereas the solution to the problem of floating might cause fence-sitting.
  • Pretesting is an important step for improving one’s survey before actually administering it.
  • Do a little Internet research to find out what a Likert scale is and when you may use one.
  • Write a closed-ended question that follows the guidelines for good survey question construction. Have a peer in the class check your work (you can do the same for him or her!).

8.5 Analysis of Survey Data

  • Define response rate, and discuss some of the current thinking about response rates.
  • Describe what a codebook is and what purpose it serves.
  • Define univariate, bivariate, and multivariate analysis.
  • Describe each of the measures of central tendency.
  • Describe what a contingency table displays.

This text is primarily focused on designing research, collecting data, and becoming a knowledgeable and responsible consumer of research. We won’t spend as much time on data analysis, or what to do with our data once we’ve designed a study and collected it, but I will spend some time in each of our data-collection chapters describing some important basics of data analysis that are unique to each method. Entire textbooks could be (and have been) written entirely on data analysis. In fact, if you’ve ever taken a statistics class, you already know much about how to analyze quantitative survey data. Here we’ll go over a few basics that can get you started as you begin to think about turning all those completed questionnaires into findings that you can share.

From Completed Questionnaires to Analyzable Data

It can be very exciting to receive those first few completed surveys back from respondents. Hopefully you’ll even get more than a few back, and once you have a handful of completed questionnaires, your feelings may go from initial euphoria to dread. Data are fun and can also be overwhelming. The goal with data analysis is to be able to condense large amounts of information into usable and understandable chunks. Here we’ll describe just how that process works for survey researchers.

As mentioned, the hope is that you will receive a good portion of the questionnaires you distributed back in a completed and readable format. The number of completed questionnaires you receive divided by the number of questionnaires you distributed is your response rate. Let’s say your sample included 100 people and you sent questionnaires to each of those people. It would be wonderful if all 100 returned completed questionnaires, but the chances of that happening are about zero. If you’re lucky, perhaps 75 or so will return completed questionnaires. In this case, your response rate would be 75% (75 divided by 100). That’s pretty darn good. Though response rates vary, and researchers don’t always agree about what makes a good response rate, having three-quarters of your surveys returned would be considered good, even excellent, by most survey researchers. There has been lots of research done on how to improve a survey’s response rate. We covered some of these suggestions previously, but they include personalizing questionnaires by, for example, addressing them to specific respondents rather than to some generic recipient such as “madam” or “sir”; enhancing the questionnaire’s credibility by providing details about the study, contact information for the researcher, and perhaps partnering with agencies likely to be respected by respondents such as universities, hospitals, or other relevant organizations; sending out prequestionnaire notices and postquestionnaire reminders; and including some token of appreciation with mailed questionnaires, even if small, such as a $1 bill.
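If it helps to see the arithmetic, here is a minimal Python sketch of the response-rate calculation just described; the numbers are the hypothetical 75-of-100 example from the paragraph above, not results from any actual study.

```python
# Response rate: completed questionnaires divided by questionnaires distributed.
def response_rate(completed: int, distributed: int) -> float:
    """Return the response rate as a percentage."""
    return 100 * completed / distributed

print(response_rate(75, 100))  # 75.0, matching the example in the text
```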

The major concern with response rates is that a low rate of response may introduce nonresponse bias, which occurs when respondents differ in important ways from nonrespondents, into a study’s findings. What if only those who have strong opinions about your study topic return their questionnaires? If that is the case, we may well find that our findings don’t at all represent how things really are or, at the very least, we are limited in the claims we can make about patterns found in our data. While high return rates are certainly ideal, a recent body of research shows that concern over response rates may be overblown (Langer, 2003). Langer, G. (2003). About response rates: Some unresolved questions. Public Perspective , May/June, 16–18. Retrieved from http://www.aapor.org/Content/aapor/Resources/PollampSurveyFAQ1/DoResponseRatesMatter/Response_Rates_-_Langer.pdf Several studies have shown that low response rates did not make much difference in findings or in sample representativeness (Curtin, Presser, & Singer, 2000; Keeter, Kennedy, Dimock, Best, & Craighill, 2006; Merkle & Edelman, 2002). Curtin, R., Presser, S., & Singer, E. (2000). The effects of response rate changes on the index of consumer sentiment. Public Opinion Quarterly, 64 , 413–428; Keeter, S., Kennedy, C., Dimock, M., Best, J., & Craighill, P. (2006). Gauging the impact of growing nonresponse on estimates from a national RDD telephone survey. Public Opinion Quarterly, 70 , 759–779; Merkle, D. M., & Edelman, M. (2002). Nonresponse in exit polls: A comprehensive analysis. In M. Groves, D. A. Dillman, J. L. Eltinge, & R. J. A. Little (Eds.), Survey nonresponse (pp. 243–258). New York, NY: John Wiley and Sons. For now, the jury may still be out on what makes an ideal response rate and on whether, or to what extent, researchers should be concerned about response rates. Nevertheless, certainly no harm can come from aiming for as high a response rate as possible.

Whatever your survey’s response rate, the major concern of survey researchers once they have their nice, big stack of completed questionnaires is condensing their data into manageable, and analyzable, bits. One major advantage of quantitative methods such as survey research, as you may recall from Chapter 1 "Introduction" , is that they enable researchers to describe large amounts of data because they can be represented by and condensed into numbers. In order to condense your completed surveys into analyzable numbers, you’ll first need to create a codebook. A codebook is a document that outlines how a survey researcher has translated her or his data from words into numbers. An excerpt from the codebook I developed from my survey of older workers can be seen in Table 8.2 "Codebook Excerpt From Survey of Older Workers" . The coded responses you see can be seen in their original survey format in Chapter 6 "Defining and Measuring Concepts" , Figure 6.12 "Example of an Index Measuring Financial Security" . As you’ll see in the table, in addition to converting response options into numerical values, a short variable name is given to each question. This shortened name comes in handy when entering data into a computer program for analysis.

Table 8.2 Codebook Excerpt From Survey of Older Workers

If you’ve administered your questionnaire the old-fashioned way, via snail mail, the next task after creating your codebook is data entry. If you’ve utilized an online tool such as SurveyMonkey to administer your survey, here’s some good news: most online survey tools come with the capability of importing survey results directly into a data analysis program. Trust me, this is indeed most excellent news. (If you don’t believe me, I highly recommend administering hard copies of your questionnaire next time around. You’ll surely then appreciate the wonders of online survey administration.)

For those who will be conducting manual data entry, there probably isn’t much I can say about this task that will make you want to perform it other than pointing out the reward of having a database of your very own analyzable data. We won’t get into too many of the details of data entry, but I will mention a few programs that survey researchers may use to analyze data once it has been entered. The first is SPSS, or the Statistical Package for the Social Sciences ( http://www.spss.com ). SPSS is a statistical analysis computer program designed to analyze just the sort of data quantitative survey researchers collect. It can perform everything from very basic descriptive statistical analysis to more complex inferential statistical analysis. SPSS is touted by many for being highly accessible and relatively easy to navigate (with practice). Other programs that are known for their accessibility include MicroCase ( http://www.microcase.com/index.html ), which includes many of the same features as SPSS, and Excel ( http://office.microsoft.com/en-us/excel-help/about-statistical-analysis-tools-HP005203873.aspx ), which is far less sophisticated in its statistical capabilities but is relatively easy to use and suits some researchers’ purposes just fine. Check out the web pages for each, which I’ve provided links to in the chapter’s endnotes, for more information about what each package can do.
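To make the coding step concrete, here is a minimal sketch of how a codebook might be applied in Python with pandas, as an alternative to the packages named above. The question wording, the variable name finsec, and the numeric codes are hypothetical illustrations, not the author’s actual instrument.

```python
import pandas as pd

# Hypothetical codebook entry: a financial-security question gets the short
# variable name "finsec", and worded responses become numeric codes.
codebook = {
    "finsec": {
        "Not at all secure": 1,
        "Somewhat secure": 2,
        "Moderately secure": 3,
        "Very secure": 4,
        "Extremely secure": 5,
    }
}

# Responses as they might be keyed in from hard-copy questionnaires.
raw = pd.DataFrame(
    {"finsec": ["Moderately secure", "Somewhat secure", "Very secure", "Moderately secure"]}
)

# Apply the codebook so the column holds analyzable numeric values.
coded = raw.replace(codebook)
print(coded)
```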

Identifying Patterns

Data analysis is about identifying, describing, and explaining patterns. Univariate analysis is the most basic form of analysis that quantitative researchers conduct. In this form, researchers describe patterns across just one variable. Univariate analysis includes frequency distributions and measures of central tendency. A frequency distribution is a way of summarizing the distribution of responses on a single survey question. Let’s look at the frequency distribution for just one variable from my older worker survey. We’ll analyze the item mentioned first in the codebook excerpt given earlier, on respondents’ self-reported financial security.

Table 8.3 Frequency Distribution of Older Workers’ Financial Security

As you can see in the frequency distribution on self-reported financial security, more respondents reported feeling “moderately secure” than any other response category. We also learn from this single frequency distribution that fewer than 10% of respondents reported being in one of the two most secure categories.
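For readers working in code rather than a statistics package, a frequency distribution like the one described above can be produced in a few lines. This is a sketch with made-up coded responses, not the actual data behind Table 8.3.

```python
import pandas as pd

# Hypothetical coded responses to a financial-security item
# (1 = least secure, 5 = most secure); a frequency distribution
# summarizes one variable at a time.
finsec = pd.Series([3, 3, 2, 4, 3, 1, 2, 3, 5, 2])

counts = finsec.value_counts().sort_index()                            # responses per category
percents = (finsec.value_counts(normalize=True).sort_index() * 100).round(1)

print(pd.DataFrame({"count": counts, "percent": percents}))
```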

Another form of univariate analysis that survey researchers can conduct on single variables is measures of central tendency. Measures of central tendency tell us what the most common, or average, response is on a question. Measures of central tendency can be taken for variables at any level of measurement of those we learned about in Chapter 6 "Defining and Measuring Concepts" , from nominal to ratio. There are three kinds of measures of central tendency: modes, medians, and means. Mode refers to the most common response given to a question. Modes are most appropriate for nominal-level variables. A median is the middle point in a distribution of responses. Median is the appropriate measure of central tendency for ordinal-level variables. Finally, the measure of central tendency used for interval- and ratio-level variables is the mean. To obtain a mean, one must add the value of all responses on a given variable and then divide that number by the total number of responses.

In the previous example of older workers’ self-reported levels of financial security, the appropriate measure of central tendency would be the median, as this is an ordinal-level variable. If we were to list all responses to the financial security question in order and then choose the middle point in that list, we’d have our median. In Figure 8.12 "Distribution of Responses and Median Value on Workers’ Financial Security" , the value of each response to the financial security question is noted, and the middle point within that range of responses is highlighted. To find the middle point, we simply divide the number of valid cases by two. The number of valid cases, 180, divided by 2 is 90, so we’re looking for the 90th value on our distribution to discover the median. As you’ll see in Figure 8.12 "Distribution of Responses and Median Value on Workers’ Financial Security" , that value is 3, thus the median on our financial security question is 3, or “moderately secure.”

Figure 8.12 Distribution of Responses and Median Value on Workers’ Financial Security

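Here is a minimal sketch of the three measures of central tendency in Python; the responses are hypothetical and far smaller than the 180-case distribution shown in Figure 8.12, but the logic of sorting the responses and taking the middle point is the same.

```python
import pandas as pd

# Hypothetical ordinal responses (1-5) to a financial-security question.
finsec = pd.Series([3, 2, 3, 4, 3, 1, 2, 3, 5, 2, 3, 4])

print("mode:", finsec.mode().tolist())   # most common response (nominal level)
print("median:", finsec.median())        # middle point of the sorted responses (ordinal level)
print("mean:", round(finsec.mean(), 2))  # sum of responses divided by their number (interval/ratio level)
```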

As you can see, we can learn a lot about our respondents simply by conducting univariate analysis of measures on our survey. We can learn even more, of course, when we begin to examine relationships among variables. Either we can analyze the relationships between two variables, called bivariate analysis, or we can examine relationships among more than two variables. This latter type of analysis is known as multivariate analysis.

Bivariate analysis allows us to assess covariation between two variables. This means we can find out whether changes in one variable occur together with changes in another. If two variables do not covary, they are said to have independence. This means simply that there is no relationship between the two variables in question. To learn whether a relationship exists between two variables, a researcher may cross-tabulate the two variables and present their relationship in a contingency table. A contingency table shows how variation on one variable may be contingent on variation on the other. Let’s take a look at a contingency table. In Table 8.4 "Financial Security Among Men and Women Workers Age 62 and Up" , I have cross-tabulated two questions from my older worker survey: respondents’ reported gender and their self-rated financial security.

Table 8.4 Financial Security Among Men and Women Workers Age 62 and Up

You’ll see in Table 8.4 "Financial Security Among Men and Women Workers Age 62 and Up" that I collapsed a couple of the financial security response categories (recall that there were five categories presented in Table 8.3 "Frequency Distribution of Older Workers’ Financial Security" ; here there are just three). Researchers sometimes collapse response categories on items such as this in order to make it easier to read results in a table. You’ll also see that I placed the variable “gender” in the table’s columns and “financial security” in its rows. Typically, values that are contingent on other values are placed in rows (a.k.a. dependent variables), while independent variables are placed in columns. This makes comparing across categories of our independent variable pretty simple. Reading across the top row of our table, we can see that around 44% of men in the sample reported that they are not financially secure while almost 52% of women reported the same. In other words, more women than men reported that they are not financially secure. You’ll also see in the table that I reported the total number of respondents for each category of the independent variable in the table’s bottom row. This is also standard practice in a bivariate table, as is including a table heading describing what is presented in the table.
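A contingency table of this kind can be reproduced with a cross-tabulation function in most analysis tools. The sketch below uses pandas with made-up respondent-level data; the category labels and counts are illustrative, not the values from Table 8.4.

```python
import pandas as pd

# Hypothetical respondent-level data: gender and a collapsed financial-security rating.
df = pd.DataFrame({
    "gender": ["Man", "Woman", "Woman", "Man", "Woman", "Man", "Woman", "Man"],
    "finsec": ["Not secure", "Not secure", "Secure", "Secure",
               "Not secure", "Very secure", "Not secure", "Secure"],
})

# Contingency table with the independent variable (gender) in columns and the
# dependent variable (financial security) in rows; column percentages make
# comparison across categories of the independent variable easy to read.
table = pd.crosstab(df["finsec"], df["gender"], normalize="columns") * 100
print(table.round(1))

# Total respondents per category of the independent variable (the table's bottom row).
print(df["gender"].value_counts())
```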

Researchers interested in simultaneously analyzing relationships among more than two variables conduct multivariate analysis. If I hypothesized that financial security declines for women as they age but increases for men as they age, I might consider adding age to the preceding analysis. To do so would require multivariate, rather than bivariate, analysis. We won’t go into detail about how to conduct multivariate analysis of quantitative survey items here, but we will return to multivariate analysis in Chapter 14 "Reading and Understanding Social Research" , where we’ll discuss strategies for reading and understanding tables that present multivariate statistics. If you are interested in learning more about the analysis of quantitative survey data, I recommend checking out your campus’s offerings in statistics classes. The quantitative data analysis skills you will gain in a statistics class could serve you quite well should you find yourself seeking employment one day.

  • While survey researchers should always aim to obtain the highest response rate possible, some recent research argues that high return rates on surveys may be less important than we once thought.
  • There are several computer programs designed to assist survey researchers with analyzing their data, including SPSS, MicroCase, and Excel.
  • Data analysis is about identifying, describing, and explaining patterns.
  • Contingency tables show how, or whether, one variable covaries with another.
  • Codebooks can range from relatively simple to quite complex. For an excellent example of a more complex codebook, check out the coding for the General Social Survey (GSS): http://publicdata.norc.org:41000/gss/documents//BOOK/GSS_Codebook.pdf .
  • The GSS allows researchers to cross-tabulate GSS variables directly from its website. Interested? Check out http://www.norc.uchicago.edu/GSS+Website/Data+Analysis .

Data Analysis in Research: Types & Methods


Content Index

  • What is data analysis in research?
  • Why analyze data in research?
  • Types of data in research
  • Finding patterns in the qualitative data
  • Methods used for data analysis in qualitative research
  • Preparing data for analysis
  • Methods used for data analysis in quantitative research
  • Considerations in research data analysis

What is data analysis in research?

Definition of research in data analysis: According to LeCompte and Schensul, research data analysis is a process used by researchers to reduce data to a story and interpret it to derive insights. The data analysis process helps reduce a large chunk of data into smaller fragments that make sense.

Three essential things occur during the data analysis process. The first is data organization. The second is data reduction through summarization and categorization, which helps find patterns and themes in the data for easy identification and linking. The third is the analysis itself, which researchers carry out in both top-down and bottom-up fashion.


On the other hand, Marshall and Rossman describe data analysis as a messy, ambiguous, and time-consuming but creative and fascinating process through which a mass of collected data is brought to order, structure and meaning.

We can say that “data analysis and data interpretation is a process representing the application of deductive and inductive logic to the research data.”

Researchers rely heavily on data as they have a story to tell or research problems to solve. It starts with a question, and data is nothing but an answer to that question. But, what if there is no question to ask? Well! It is possible to explore data even without a problem – we call it ‘Data Mining’, which often reveals some interesting patterns within the data that are worth exploring.

Regardless of the type of data researchers explore, their mission and their audience’s vision guide them to find the patterns that shape the story they want to tell. One of the essential things expected from researchers while analyzing data is to stay open and remain unbiased toward unexpected patterns, expressions, and results. Remember, sometimes data analysis tells the most unforeseen yet exciting stories that were not expected when initiating the analysis. Therefore, rely on the data you have at hand and enjoy the journey of exploratory research.


Every kind of data describes something once a specific value is assigned to it. For analysis, you need to organize these values and process and present them in a given context to make them useful. Data can come in different forms; here are the primary data types.

  • Qualitative data: When the data presented has words and descriptions, we call it qualitative data . Although you can observe this data, it is subjective and harder to analyze, especially for comparison. Example: anything describing taste, experience, texture, or an opinion is considered qualitative data. This type of data is usually collected through focus groups, personal qualitative interviews , qualitative observation, or open-ended questions in surveys.
  • Quantitative data: Any data expressed in numbers or numerical figures is called quantitative data . This type of data can be distinguished into categories, grouped, measured, calculated, or ranked. Example: age, rank, cost, length, weight, scores, and so on all come under this type of data. You can present such data in graphs or charts, or apply statistical analysis methods to it. The Outcomes Measurement Systems (OMS) questionnaires in surveys are a significant source of numeric data.
  • Categorical data: It is data presented in groups. However, an item included in the categorical data cannot belong to more than one group. Example: a person responding to a survey by describing their living style, marital status, smoking habit, or drinking habit provides categorical data. A chi-square test is a standard method used to analyze this data; a minimal sketch follows this list.
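As a rough illustration of the chi-square test mentioned in the last bullet, here is a minimal sketch using SciPy. The variables, category labels, and counts are hypothetical.

```python
from scipy.stats import chi2_contingency

# Hypothetical contingency table of two categorical variables:
# rows = smoking habit (smoker, non-smoker), columns = marital status (married, single).
observed = [
    [30, 20],
    [50, 100],
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}, df = {dof}")
```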


Data analysis in qualitative research

Data analysis in qualitative research works a little differently from numerical data because qualitative data is made up of words, descriptions, images, objects, and sometimes symbols. Getting insight from such complex information is a complicated process, which is why it is typically used for exploratory research and data analysis .

Although there are several ways to find patterns in textual information, a word-based method is the most widely relied-upon technique for research and data analysis. Notably, the data analysis process in qualitative research is largely manual: researchers read the available data and identify repetitive or commonly used words.

For example, while studying data collected from African countries to understand the most pressing issues people face, researchers might find  “food”  and  “hunger” are the most commonly used words and will highlight them for further analysis.
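A word-based pass like the one described above can be sketched in a few lines of Python; the responses, the stopword list, and the resulting counts below are all hypothetical.

```python
from collections import Counter
import re

# Hypothetical open-ended responses; a word-based pass simply counts how often
# each word appears across all answers.
responses = [
    "We worry about food prices and hunger in our village",
    "Hunger is the biggest problem here, food is too expensive",
    "Food security and clean water matter most to families",
]

stopwords = {"we", "about", "and", "in", "our", "is", "the", "too", "to", "here", "most"}
words = [
    word
    for response in responses
    for word in re.findall(r"[a-z]+", response.lower())
    if word not in stopwords
]
print(Counter(words).most_common(5))  # "food" and "hunger" surface as frequent terms
```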


The keyword context is another widely used word-based technique. In this method, the researcher tries to understand the concept by analyzing the context in which the participants use a particular keyword.  

For example , researchers conducting research and data analysis for studying the concept of ‘diabetes’ amongst respondents might analyze the context of when and how the respondent has used or referred to the word ‘diabetes.’

The scrutiny-based technique is another highly recommended  text analysis  method used to identify patterns in qualitative data. Compare and contrast is the most widely used approach under this technique; it examines how one piece of text is similar to or different from another.

For example: To find out the “importance of a resident doctor in a company,” the collected data is divided into people who think it is necessary to hire a resident doctor and those who think it is unnecessary. Compare and contrast is the best method for analyzing polls with single-answer question types .

Metaphors can be used to reduce the data pile and find patterns in it so that it becomes easier to connect data with theory.

Variable Partitioning is another technique used to split variables so that researchers can find more coherent descriptions and explanations from the enormous data.


There are several techniques to analyze the data in qualitative research, but here are some commonly used methods:

  • Content Analysis:  It is widely accepted and the most frequently employed technique for data analysis in research methodology. It can be used to analyze the documented information from text, images, and sometimes from the physical items. It depends on the research questions to predict when and where to use this method.
  • Narrative Analysis: This method is used to analyze content gathered from various sources such as personal interviews, field observation, and  surveys . Most of the time, the stories or opinions shared by people are examined to find answers to the research questions.
  • Discourse Analysis:  Similar to narrative analysis, discourse analysis is used to analyze the interactions with people. Nevertheless, this particular method considers the social context under which or within which the communication between the researcher and respondent takes place. In addition to that, discourse analysis also focuses on the lifestyle and day-to-day environment while deriving any conclusion.
  • Grounded Theory:  When you want to explain why a particular phenomenon happened, using grounded theory to analyze qualitative data is the best resort. Grounded theory is applied to study data about a host of similar cases occurring in different settings. When researchers use this method, they may alter explanations or produce new ones until they arrive at a conclusion.


Data analysis in quantitative research

The first stage in research and data analysis is to prepare the data for analysis so that the nominal data can be converted into something meaningful. Data preparation consists of the phases below.

Phase I: Data Validation

Data validation is done to check whether the collected data sample meets the pre-set standards or is a biased sample. It is divided into four stages:

  • Fraud: To ensure an actual human being records each response to the survey or the questionnaire
  • Screening: To make sure each participant or respondent is selected or chosen in compliance with the research criteria
  • Procedure: To ensure ethical standards were maintained while collecting the data sample
  • Completeness: To ensure that the respondent answered every question in an online survey or, for interviews, that the interviewer asked every question in the questionnaire
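A hedged sketch of how the screening and completeness checks might be automated with pandas; the column names and the adults-only screening rule are assumptions for the example, not part of the original.

```python
import pandas as pd

# Illustrative raw survey export; column names are hypothetical.
raw = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4],
    "age": [34, 17, 45, 29],
    "q1_satisfaction": [4, 5, None, 3],
    "q2_recommend": [1, 0, 1, None],
})

# Screening: keep only respondents who meet the research criteria (adults here).
screened = raw[raw["age"] >= 18]

# Completeness: flag respondents who skipped any question.
question_cols = ["q1_satisfaction", "q2_recommend"]
complete = screened.dropna(subset=question_cols)
incomplete_ids = screened.loc[screened[question_cols].isna().any(axis=1), "respondent_id"]

print(f"{len(complete)} complete responses; incomplete: {list(incomplete_ids)}")
```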

Phase II: Data Editing

Extensive research data samples often come loaded with errors: respondents sometimes fill in fields incorrectly or skip them accidentally. Data editing is the process in which researchers confirm that the collected data is free of such errors. They conduct basic consistency and outlier checks to edit the raw data and make it ready for analysis; a small outlier-screening sketch follows.
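One common editing check is screening numeric answers for implausible values. This minimal sketch uses a simple interquartile-range rule on a hypothetical ‘hours per week’ question; the data and the 1.5×IQR threshold are illustrative choices, not requirements.

```python
import pandas as pd

# Hypothetical answers to "How many hours per week do you use the product?"
hours = pd.Series([2, 3, 4, 5, 4, 3, 80, 6, 5, 4])

# Flag values far outside the middle 50% of the data as candidate outliers.
q1, q3 = hours.quantile([0.25, 0.75])
iqr = q3 - q1
outliers = hours[(hours < q1 - 1.5 * iqr) | (hours > q3 + 1.5 * iqr)]

print("Candidate outliers to review before analysis:", list(outliers))
```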

Phase III: Data Coding

Of the three phases, this is the most critical: it involves grouping survey responses and assigning values to them. If a survey is completed by a sample of 1,000 respondents, the researcher might create age brackets to distinguish respondents by age, which makes it easier to analyze small data buckets rather than one massive pile. A sketch of this kind of bucketing appears below.
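A minimal sketch of the age-bracket coding described above, using pandas; the bracket edges and labels are assumptions chosen for illustration.

```python
import pandas as pd

# Hypothetical respondent ages from a 1,000-person survey (shortened here).
ages = pd.Series([19, 23, 31, 45, 52, 38, 27, 64, 70, 35])

# Code raw ages into brackets so responses can be analyzed bucket by bucket.
age_brackets = pd.cut(
    ages,
    bins=[17, 24, 34, 44, 54, 64, 120],
    labels=["18-24", "25-34", "35-44", "45-54", "55-64", "65+"],
)

print(age_brackets.value_counts().sort_index())
```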


After the data is prepared, researchers can apply different analysis methods to derive meaningful insights. Statistical analysis is by far the most favored approach for numerical data. Here it is essential to distinguish categorical data, which consists of distinct categories or labels, from numerical data, which consists of measurable quantities. Statistical methods fall into two groups: descriptive statistics, which describe the data, and inferential statistics, which help compare the data and generalize beyond it.

Descriptive statistics

This method is used to describe the basic features of many kinds of research data. It presents the data in a meaningful way so that patterns start to make sense. However, descriptive analysis does not go beyond summarizing the data; any conclusions still rest on the hypotheses the researchers have formulated. Here are the major groups of descriptive measures (a combined worked example follows the four groups below).

Measures of Frequency

  • Count, Percent, Frequency
  • It is used to denote how often a particular event occurs.
  • Researchers use it when they want to showcase how often a response is given.

Measures of Central Tendency

  • Mean, Median, Mode
  • These measures summarize the distribution with a single representative point.
  • Researchers use them when they want to show the most common or the average response.

Measures of Dispersion or Variation

  • Range, Variance, Standard deviation
  • The range is the difference between the highest and lowest scores.
  • The variance is the average squared deviation of each observed score from the mean; the standard deviation is its square root.
  • These measures describe the spread of scores, often stated as intervals.
  • Researchers use them to show how spread out the data is and how much that spread affects the mean.

Measures of Position

  • Percentile ranks, Quartile ranks
  • These rely on standardized scores, helping researchers see where a given score sits relative to the rest.
  • They are often used when researchers want to compare an individual score with the overall distribution.
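A combined sketch of all four groups of descriptive measures, computed on a hypothetical set of 1–10 satisfaction scores (the data is invented for illustration).

```python
import pandas as pd

# Hypothetical 1-10 satisfaction scores from a survey question.
scores = pd.Series([7, 8, 6, 9, 7, 10, 5, 7, 8, 6])

# Measures of frequency: count and percent of each response value.
freq = scores.value_counts().sort_index()
percent = (freq / len(scores) * 100).round(1)

# Measures of central tendency.
mean, median, mode = scores.mean(), scores.median(), scores.mode().iloc[0]

# Measures of dispersion or variation.
value_range = scores.max() - scores.min()
variance, std_dev = scores.var(), scores.std()

# Measures of position: quartiles and the percentile rank of one score.
quartiles = scores.quantile([0.25, 0.5, 0.75])
rank_of_9 = (scores < 9).mean() * 100  # percentile rank of a score of 9

print(freq, percent, sep="\n")
print(f"mean={mean}, median={median}, mode={mode}")
print(f"range={value_range}, variance={variance:.2f}, std={std_dev:.2f}")
print(f"quartiles:\n{quartiles}\npercentile rank of 9: {rank_of_9:.0f}%")
```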

In quantitative research, descriptive analysis yields absolute numbers, but on its own it is rarely enough to explain the rationale behind those numbers. It is still worth choosing the method that best suits your questionnaire and the story you want to tell. For example, the mean is a good way to present students’ average scores in a school. Descriptive statistics are the right choice when researchers intend to keep the findings limited to the collected sample without generalizing beyond it; to compare the average turnout in two different cities, for instance, descriptive statistics are enough.

Descriptive analysis is also called a ‘univariate analysis’ since it is commonly used to analyze a single variable.

Inferential statistics

Inferential statistics are used to make predictions about a larger population after analyzing a sample that represents it. For example, you could ask 100 or so audience members at a movie theater whether they like the movie they are watching; researchers then use inferential statistics on that sample to infer that roughly 80–90% of the wider audience likes the movie.

Here are two significant areas of inferential statistics.

  • Estimating parameters: Taking statistics from the sample data and using them to say something about a population parameter.
  • Hypothesis testing: Using sample data to answer the survey research questions. For example, researchers might want to know whether a newly launched lipstick shade is well received, or whether multivitamin capsules help children perform better at games. (A sketch of both ideas follows this list.)
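A hedged sketch of both ideas, using the movie-theater example above: a normal-approximation confidence interval for the share of people who liked the movie (estimating a parameter) and a two-sample t-test comparing two groups of ratings (testing a hypothesis). The counts and ratings are invented.

```python
import math
from scipy import stats

# --- Estimating a parameter: confidence interval for a proportion ---
# Suppose 85 of the 100 people sampled at the theater said they liked the movie.
liked, n = 85, 100
p_hat = liked / n
se = math.sqrt(p_hat * (1 - p_hat) / n)
z = stats.norm.ppf(0.975)  # 95% confidence
ci_low, ci_high = p_hat - z * se, p_hat + z * se
print(f"Estimated share who liked it: {p_hat:.0%} (95% CI {ci_low:.0%}-{ci_high:.0%})")

# --- Hypothesis test: do two groups rate the movie differently? ---
# Hypothetical 1-10 ratings from two screening times.
evening = [8, 9, 7, 8, 9, 6, 8, 7]
matinee = [6, 7, 5, 6, 8, 6, 5, 7]
t_stat, p_value = stats.ttest_ind(evening, matinee)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```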

Beyond these basics, more sophisticated methods are used to examine the relationships between variables rather than describe a single variable. They come into play when researchers want something beyond absolute numbers in order to understand how variables relate to one another.

Here are some of the commonly used methods for data analysis in research (a combined code sketch of the first few appears after the list).

  • Correlation: When researchers are not conducting experimental or quasi-experimental research but still want to understand the relationship between two or more variables, they opt for correlational methods.
  • Cross-tabulation: Also called contingency tables, cross-tabulation is used to analyze the relationship between multiple variables. Suppose the data contains age and gender categories presented in rows and columns; a two-dimensional cross-tabulation shows the number of males and females in each age category, making the analysis straightforward.
  • Regression analysis: To understand how strongly two variables are related, researchers rarely look beyond regression analysis, which is also a form of predictive analysis. The method has one essential dependent variable and one or more independent variables, and the aim is to estimate the impact of the independent variables on the dependent variable. The values of both are assumed to have been measured in an error-free, random manner.
  • Frequency tables: These summarize how often each value or category of a variable occurs, giving a quick picture of the distribution of responses.
  • Analysis of variance (ANOVA): A statistical procedure for testing the degree to which two or more groups vary or differ in an experiment. A considerable degree of variation between groups suggests the findings are significant; in many contexts, ANOVA testing and variance analysis are used interchangeably.
  • Researchers must have the skills to analyze and manipulate the data, and should be trained to demonstrate a high standard of research practice. Ideally, they should understand more than the basics of why one statistical method is chosen over another in order to obtain better insights.
  • Research and data analytics projects usually differ by scientific discipline, so getting statistical advice at the beginning of the analysis helps in designing the survey questionnaire, selecting data collection methods, and choosing samples.
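A combined sketch of correlation, cross-tabulation, a frequency table, simple regression, and one-way ANOVA using pandas and SciPy; the column names and the invented survey extract are assumptions for illustration only.

```python
import pandas as pd
from scipy import stats

# Hypothetical survey extract; column names are invented for the example.
df = pd.DataFrame({
    "gender": ["F", "M", "F", "M", "F", "M", "F", "M"],
    "age_group": ["18-34", "18-34", "35-54", "35-54", "55+", "55+", "18-34", "35-54"],
    "ad_spend": [10, 12, 8, 15, 7, 14, 11, 9],     # independent variable
    "satisfaction": [6, 7, 5, 9, 4, 8, 7, 5],      # dependent variable
})

# Correlation between ad spend and satisfaction.
r, p = stats.pearsonr(df["ad_spend"], df["satisfaction"])

# Cross-tabulation of age group by gender.
crosstab = pd.crosstab(df["age_group"], df["gender"])

# Frequency table: how often each age group appears.
freq_table = df["age_group"].value_counts()

# Simple linear regression of satisfaction on ad spend.
reg = stats.linregress(df["ad_spend"], df["satisfaction"])

# One-way ANOVA: does satisfaction differ across age groups?
groups = [g["satisfaction"].values for _, g in df.groupby("age_group")]
f_stat, anova_p = stats.f_oneway(*groups)

print(f"correlation r={r:.2f} (p={p:.3f})")
print(crosstab, freq_table, sep="\n\n")
print(f"regression: slope={reg.slope:.2f}, intercept={reg.intercept:.2f}")
print(f"ANOVA: F={f_stat:.2f}, p={anova_p:.3f}")
```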


  • The primary aim of research data analysis is to derive insights that are unbiased. Any mistake, or any bias, in collecting the data, selecting an analysis method, or choosing the audience sample will lead to a biased inference.
  • No amount of analytical sophistication can rectify poorly defined objectives or outcome measurements. Whether the fault lies in the design or in unclear intentions, the resulting lack of clarity can mislead readers, so avoid the practice.
  • The motive behind data analysis in research is to present accurate and reliable findings. As far as possible, avoid statistical errors, and find a way to deal with everyday challenges such as outliers, missing data, data alteration, data mining, and graphical representation.

The sheer amount of data generated daily is staggering, especially now that data analysis has taken center stage; in 2018 alone, the total data supply amounted to 2.8 trillion gigabytes. It is clear that enterprises hoping to survive in a hypercompetitive world must be able to analyze complex research data, derive actionable insights, and adapt to new market needs.


QuestionPro is an online survey platform that empowers organizations in data analysis and research and provides them with a medium to collect data by creating appealing surveys.



Google Forms: How to use this free Google Workspace tool to create surveys, quizzes, and questionnaires

  • Google Forms is a free online software for creating surveys and questionnaires.
  • You need a Google account to create a Google Form, but anyone can fill out a Google Form.
  • You can personalize your Google Form with question types, header images, and color themes.


Google Forms is free online software that allows you to create surveys, quizzes, and more. 

Google Forms is part of Google's web-based apps suite, which also includes Google Docs, Google Sheets, Google Slides, and more. It's a versatile tool that can be used for various applications, from gathering RSVPs for an event to creating a pop quiz. You'll need a Google account to create a Google Form, but you can adjust the form settings so that recipients can fill it out regardless of whether they have a Google account.

Currently, Google Forms does not offer a native mobile app, but you can access it in a web browser on your desktop computer.

Here's everything else you need to know about Google Forms.

How can I create a Google Form?

Google Forms differentiates itself from similar online software through its library of customization options. When creating your new form, you'll have the ability to select from a series of templates or design your very own. 

If you choose to make a new template, consider adding your logo and photos, and watch Google generate a custom color set to match.

Here's how to do it: 

  • Go to docs.google.com/forms
  • Click Blank form to create a new form, or choose a pre-made template to kick-start the process. Google has a number of helpful template options, including feedback forms, order forms, job applications, worksheets, registration forms, and even "Find a Time" forms if you're trying to schedule an event or Google Meet conference call.

With the Q&A format at the heart of Google Forms, the Workspace tool offers various question and response options, including multiple-choice, dropdown, linear scale, and multiple-choice and tick-box grid.

With each new question, you can integrate multimedia, such as images or YouTube videos, or add text descriptions that offer hints or expound on the question.

Related stories

If you're a Google Classroom user, you can use Google Forms to create quiz assignments for your students.

How can I customize or organize my Google Form?

In the Settings tab, you can customize options in the Responses dropdown, like Collect email addresses .

You can choose to require respondents to enter an email address to submit the Form by selecting Responder input or force respondents to sign into their Google accounts to respond by selecting Verified . You can also let respondents submit anonymously by choosing Do not collect .

In the Presentation dropdown below, you can click boxes to include a progress bar, shuffle the order of the questions, and set a custom confirmation message that respondents will receive upon submitting the Form.

In the Quizzes dropdown, you can turn your form into a quiz.

Organizational features let you determine the order of your queries through a drag-and-drop tool or randomize the answer order for specific questions through the form's settings.

Another way to organize your form is through Google Forms' section tool. These can be helpful for longer surveys, as they break questions up into manageable chunks. To create a section, click the Add section icon (two vertically stacked rectangles) on the right toolbar. It's located on the same toolbar as the "+" for adding a question.

Once you're ready to share your Google Form, clicking the Send button at the top right of the screen will let you send the Form via email, copy a link, or copy an embedded HTML code to add the form to your website or blog.

How to navigate Google Forms responses

Once your Google Form is published and you've shared it using any of its public or private share options, it will automatically collect responses as people fill it out and submit it. Answers gathered by a Google Form are only viewable to you, the creator, and any collaborators you add.

To view responses for your Google Form, open your Google Form and navigate to the Responses tab. Here, you will see a summary of the responses collected. Click the green Google Sheets icon to create a spreadsheet that displays all of the information gathered from the Form, which will automatically update as people submit your Google Form.

In the Responses tab, you can also elect to get email notifications for new responses, select a response destination (either a new or existing spreadsheet), download, or print the answers by clicking the three dots next to the Google Sheets icon. There's also an option to delete all replies, which can be useful in deleting responses collected when testing your sheet.
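Once responses are downloaded as a CSV from the Responses tab (or exported from the linked Google Sheet), they can be analyzed with ordinary data tools. A minimal sketch, assuming a hypothetical file name and question wording:

```python
import pandas as pd

# Google Forms exports typically include a "Timestamp" column and one column
# per question, headed by the question text. The file name here is assumed.
responses = pd.read_csv("event_feedback_responses.csv")

print(f"{len(responses)} responses received")
# Tally answers to one (hypothetical) multiple-choice question.
print(responses["How satisfied were you with the event?"].value_counts())
```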



America’s best decade, according to data

One simple variable, more than anything, determines when you think the nation peaked.


How do you define the good old days?


The plucky poll slingers at YouGov, who are consistently willing to use their elite-tier survey skills in service of measuring the unmeasurable, asked 2,000 adults which decade had the best and worst music, movies, economy and so forth, across 20 measures. But when we charted them, no consistent pattern emerged.

We did spot some peaks: When asked which decade had the most moral society, the happiest families or the closest-knit communities, White people and Republicans were about twice as likely as Black people and Democrats to point to the 1950s. The difference probably depends on whether you remember that particular decade for “Leave it to Beaver,” drive-in theaters and “12 Angry Men” — or the Red Scare, the murder of Emmett Till and massive resistance to school integration.

“This was a time when Repubs were pretty much running the show and had reason to be happy,” pioneering nostalgia researcher Morris Holbrook told us via email. “Apparently, you could argue that nostalgia is colored by political preferences. Surprise, surprise.”

And he’s right! But any political, racial or gender divides were dwarfed by what happened when we charted the data by generation. Age, more than anything, determines when you think America peaked.

So, we looked at the data another way, measuring the gap between each person’s birth year and their ideal decade. The consistency of the resulting pattern delighted us: It shows that Americans feel nostalgia not for a specific era, but for a specific age.

The good old days when America was “great” aren’t the 1950s. They’re whatever decade you were 11, your parents knew the correct answer to any question, and you’d never heard of war crimes tribunals, microplastics or improvised explosive devices. Or when you were 15 and athletes and musicians still played hard and hadn’t sold out.

Not every flavor of nostalgia peaks as sharply as music does. But by distilling them to the most popular age for each question, we can chart a simple life cycle of nostalgia.

The closest-knit communities were those in our childhood, ages 4 to 7. The happiest families, most moral society and most reliable news reporting came in our early formative years — ages 8 through 11. The best economy, as well as the best radio, television and movies, happened in our early teens — ages 12 through 15.


Slightly spendier activities such as fashion, music and sporting events peaked in our late teens — ages 16 through 19 — matching research from the University of South Australia’s Ehrenberg-Bass Institute, which shows music nostalgia centers on age 17 .

YouGov didn’t just ask about the best music and the best economy. The pollsters also asked about the worst music and the worst economy. But almost without exception, if you ask an American when times were worst, the most common response will be “right now!”

This holds true even when “now” is clearly not the right answer. For example, when we ask which decade had the worst economy, the most common answer is today. The Great Depression — when, for much of a decade, unemployment exceeded what we saw in the worst month of pandemic shutdowns — comes in a grudging second.

To be sure, other forces seem to be at work. Democrats actually thought the current economy wasn’t as bad as the Great Depression. Republicans disagreed. In fact, measure after measure, Republicans were more negative about the current decade than any other group — even low-income folks in objectively difficult situations.

So, we called the brilliant Joanne Hsu, director of the University of Michigan’s Surveys of Consumers who regularly wrestles with partisan bias in polling.

Hsu said that yes, she sees a huge partisan split in the economy, and yes, Republicans are far more negative than Democrats. But it hasn’t always been that way.

“People whose party is in the White House always have more favorable sentiment than people who don’t,” she told us. “And this has widened over time.”

In a recent analysis , Hsu — who previously worked on some of our favorite surveys at the Federal Reserve — found that while partisanship drove wider gaps in economic expectations than did income, age or education even in the George W. Bush and Barack Obama years, they more than doubled under Donald Trump as Republicans’ optimism soared and Democrats’ hopes fell.

Our attitudes reversed almost the instant President Biden took office, but the gap remains nearly as wide. That is to say, if we’d asked the same questions about the worst decades during the Trump administration, Hsu’s work suggests the partisan gap could have shriveled or even flipped eyeglasses over teakettle.

To understand the swings, Hsu and her friends spent the first part of 2024 asking 2,400 Americans where they get their information about the economy. In a new analysis , she found Republicans who listen to partisan outlets are more likely to be negative, and Democrats who listen to their own version of such news are more positive — and that Republicans are a bit more likely to follow partisan news.

But while Fox and friends drive some negativity, only a fifth of Republicans get their economic news from partisan outlets. And Democrats and independents give a thumbs down to the current decade, too, albeit at much lower rates.

There’s clearly something more fundamental at work. As YouGov’s Carl Bialik points out, when Americans were asked last year which decade they’d most want to live in, the most common answer was now. At some level then, it seems unlikely that we truly believe this decade stinks by almost every measure.

A deeper explanation didn’t land in our laps until halfway through a Zoom call with four well-caffeinated Australian marketing and consumer-behavior researchers: the Ehrenberg-Bass folks behind the music study we cited above. (Their antipodean academic institute has attracted massive sponsorships by replacing typical corporate marketing fluffery with actual evidence.)

Their analysis began when Callum Davies needed to better understand the demographics of American music tastes to interpret streaming data for his impending dissertation. Since they were already asking folks about music, Davies and his colleagues decided they might as well seize the opportunity to update landmark research from Holbrook and Robert Schindler about music nostalgia.

Building on the American scholars’ methods, they asked respondents to listen to a few seconds each of 34 songs , including Justin Timberlake’s “Sexy Back” and Johnny Preston’s “ Running Bear .” Then respondents were asked to rate each song on a zero-to-10 scale. (In the latter case, we can’t imagine the high end of the scale got much use, especially if the excerpt included that song’s faux-tribal “hooga-hooga” chant and/or its climactic teen drownings.)

Together, the songs represented top-10 selections from every even-numbered year from 1950 (Bing and Gary Crosby’s “Play a Simple Melody”) to 2016 (Rihanna’s “Work”), allowing researchers to gather our preferences for music released throughout our lives.

Like us, they found that you’ll forever prefer the music of your late teens. But their results show one big difference: There’s no sudden surge of negative ratings for the most recent music.

Marketing researcher Bill Page said that by broadly asking when music, sports or crime were worst, instead of getting ratings for specific years or items, YouGov got answers to a question they didn’t ask.

“When you ask about ‘worst,’ you’re not asking for an actual opinion,” Page said. “You’re asking, ‘Are you predisposed to think things get worse?’”

“There’s plenty of times surveys unintentionally don’t measure what they claim to,” his colleague Zac Anesbury added.

YouGov actually measured what academics call “declinism,” his bigwig colleague Carl Driesener explained. He looked a tiny bit offended when we asked if that was a real term or slang they’d coined on the spot. But in our defense, only a few minutes had passed since they had claimed “cozzie livs” was Australian for “the cost of living crisis.”

Declinists believe the world keeps getting worse. It’s often the natural result of rosy retrospection, or the idea that everything — with the possible exception of “Running Bear” — looks better in memory than it did at the time. This may happen in part because remembering the good bits of the past can help us through difficult times, Page said.

It’s a well-established phenomenon in psychology, articulated by Leigh Thompson, Terence Mitchell and their collaborators in a set of analyses . They found that when asked to rate a trip mid-vacation, we often sound disappointed. But after we get home — when the lost luggage has been found and the biting-fly welts have stopped itching — we’re as positive about the trip as we were in the early planning stage. Sometimes even more so.

So saying the 2020s are the worst decade ever is akin to sobbing about “the worst goldang trip ever” at 3 a.m. in a sketchy flophouse full of Russian-speaking truckers after you’ve run out of cash and spent three days racing around Urumqi looking for the one bank in Western China that takes international cards.

A few decades from now, our memories shaped by grainy photos of auroras and astrolabes, we’ll recall only the bread straight from streetside tandoor-style ovens and the locals who went out of their way to bail out a couple of distraught foreigners.

In other words, the 2020s will be the good old days.

Greetings! The Department of Data curates queries. What are you curious about: How many islands have been completely de-ratted? Where is America’s disc-golf heartland? Who goes to summer camp? Just ask!

If your question inspires a column, we’ll send you an official Department of Data button and ID card. This week’s buttons go to YouGov’s Taylor Orth, who correctly deduced we’d be fascinated by decade-related polls, and Stephanie Killian in Kennesaw, Ga., who also got a button for our music column , with her questions about how many people cling to the music of their youth.



Israeli Views of the Israel-Hamas War

Jewish Israelis and Arab Israelis see the war very differently



This Pew Research Center analysis covers Israeli attitudes on the Israel-Hamas war, including opinions on how it’s being conducted, the country’s future, Israeli political leaders and the United States’ role in the conflict.

The data is from a survey of 1,001 Israeli adults conducted face-to-face from March 3 to April 4, 2024. Interviews were conducted in Hebrew and Arabic, and the survey is representative of the adult population ages 18 and older, excluding those in East Jerusalem and non-sanctioned outposts. (The survey also did not cover the West Bank or Gaza.) The survey included an oversample of Arabs in Israel. It was subsequently weighted to be representative of the Israeli adult population with the following variables: gender by ethnicity, age by ethnicity, education, region, urbanicity and probability of selection of respondent.

Here are the questions used for the report, along with responses, and the survey methodology .

A bar chart showing that Israelis are divided over the country’s military response against Hamas in Gaza

A new Pew Research Center survey finds that 39% of Israelis say Israel’s military response against Hamas in Gaza has been about right, while 34% say it has not gone far enough and 19% think it has gone too far.

According to the survey, conducted in March and early April, roughly two-thirds of Israelis are also confident that Israel will either probably (27%) or definitely (40%) achieve its goals in the war against Hamas. Still, majorities of Israeli adults are worried about aspects of the ongoing war: 

  • 61% say they are extremely or very concerned about the war expanding into other countries in the region.
  • 68% say they are extremely or very concerned about the war going on for a long time.

When it comes to what should happen after the war, there is less consensus. A 40% plurality of Israelis think Israel should govern the Gaza Strip. Smaller shares think Gazans should decide who governs (14%) or would like to see a Palestinian Authority national unity government either with (6%) or without (12%) President Mahmoud Abbas (also known as Abu Mazen) in leadership.

Separately, 26% of Israelis think a way can be found for Israel and an independent Palestinian state to coexist peacefully with each other – down from  35% who said the same last year , prior to the war, and about half as many as took that position when the question was first asked in 2013.

Research in the West Bank and Gaza

Pew Research Center has polled the Palestinian territories in previous years, but we were unable to conduct fieldwork in Gaza or the West Bank for our March/April 2024 survey due to security concerns. We are actively investigating possibilities for both qualitative and quantitative research on public opinion in the region and hope to be able to provide more data in the coming months.

These are among the key findings of a new survey of 1,001 Israelis, conducted via face-to-face interviews from March 3 to April 4, 2024.

The survey also asked Israelis about the U.S. role in the conflict. (It was conducted before U.S. President Joe Biden took a tougher stance toward Israel in the wake of an Israeli airstrike that killed seven World Central Kitchen aid workers. And it predates Biden’s declaration that the U.S. would not provide offensive weapons to Israel in the event of a Rafah invasion as well as the subsequent Israeli strikes in Rafah .)

The survey shows:

  • 60% of Israelis disapprove of the way Biden is handling the Israel-Hamas war.
  • 41% think Biden is striking the right balance between Israelis and Palestinians. Still, 27% of Israelis say he is favoring Israelis too much, while roughly the same share (25%) say he favors Palestinians too much.
  • Most Israelis express confidence in Biden to handle world affairs and have a favorable view of the U.S. But ratings of both Biden and the U.S. have fallen at least 10 percentage points since last year. (For more on this, read “How Israelis and Americans view one another and the U.S. role in the Israel-Hamas war.” )

A bar chart showing that a majority of Israelis want the U.S. to play a major role in diplomatically ending the war

Nonetheless, a large majority (72%) still want the U.S. to play a major role in diplomatically resolving the war – more than say the same about any of the other countries or organizations asked about, including Egypt (45%), Saudi Arabia (29%), Qatar (27%) and the United Nations (24%).

Arab and Jewish Israelis

A dot plot showing that Israeli Arabs and Jews diverge sharply over views of the U.S., Israel-Hamas war and Biden’s handling of it

People across Israeli society perceive the war in vastly different ways, depending on their views of the current leadership, how they identify ideologically, their religious backgrounds and other factors. One of the starkest divides is between Arab and Jewish Israelis:

  • Arab Israelis are less likely than Jewish Israelis to think Israel will succeed in achieving its war aims (38% vs. 76%) and less optimistic when thinking about the future of the country’s national security (21% vs. 63%).
  • Israeli Arabs are much more likely than Jews to say the country’s military response has gone too far (74% vs. 4%).
  • Almost no Israeli Arabs (3%) want Israel to govern the Gaza Strip after the war, while half of Israeli Jews think it should do so. A plurality of Arabs would like the people who live in Gaza to decide who governs (37%), while only 8% of Jews prefer this outcome.
  • Arab Israelis have much less favorable views of the U.S. than Jewish Israelis do (29% vs. 90%), as well as less confidence in Biden (21% vs. 66%). They are also much more likely to disapprove of Biden’s handling of the war (86% vs. 53%) and to think he favors Israelis too much (86% vs. 11%).
  • Although a majority of Arabs (63%) want the U.S. to play a major role in diplomatically resolving the war between Israel and Hamas, an even greater share of Jewish Israelis (74%) want this. And roughly two-thirds of Arabs are open to Qatar and Egypt playing a major role, while only about four-in-ten Jews or fewer say the same.
  • Roughly nine-in-ten Arabs (92%) have a negative view of Israeli Prime Minister Benjamin Netanyahu, compared with around half of Jews (48%). Views of the two other war cabinet members , Benny Gantz and Yoav Gallant, are also divided along ethnic lines. (The survey was conducted before Gantz threatened to leave the war cabinet .)

In many cases, there are also large ideological differences, with Israelis who describe themselves as being on the left generally more critical of Israel’s war response, less optimistic about its success and more critical of the U.S. than those on the right. There also tend to be differences among Israeli Jews based on how religiously observant they are. For more on how we looked at these differences, refer to the box below.

Jewish religious groups in Israel: Haredim, Datiim, Masortim and Hilonim

Nearly all Israeli Jews identify as either Haredi (commonly translated as “ultra-Orthodox”), Dati (“religious”), Masorti (“traditional”) or Hiloni (“secular”). The spectrum of religious observance in Israel – on which Haredim are generally the most religious and Hilonim the least – does not always line up perfectly with Israel’s political spectrum. On some issues, including those pertaining to religion in public life, there is a clear overlap: Haredim are furthest to the right, and Hilonim are furthest to the left, with Datiim and Masortim in between. But on other political issues, including those related to the Israeli-Palestinian conflict and views of the United States, differences between religious groups do not always mirror those between people at different points on the ideological spectrum. Because of sample size considerations, we combine Haredim and Datiim for analysis in this report.

For more information on the different views of these religious groups, read the Center’s 2016 deep dive on the topic, “Israel’s Religiously Divided Society.”

