Data Analysis in Research: Types & Methods


Content Index

  • What is data analysis in research?
  • Why analyze data in research?
  • Types of data in research
  • Finding patterns in the qualitative data
  • Methods used for data analysis in qualitative research
  • Preparing data for analysis
  • Methods used for data analysis in quantitative research
  • Considerations in research data analysis

What is data analysis in research?

Definition of data analysis in research: According to LeCompte and Schensul, research data analysis is a process used by researchers to reduce data to a story and interpret it to derive insights. The data analysis process helps reduce a large chunk of data into smaller fragments that make sense.

Three essential things occur during the data analysis process. The first is data organization. The second is data reduction, achieved through summarization and categorization, which together help find patterns and themes in the data for easy identification and linking. The third and last is data analysis itself, which researchers conduct in both top-down and bottom-up fashion.


On the other hand, Marshall and Rossman describe data analysis as a messy, ambiguous, and time-consuming but creative and fascinating process through which a mass of collected data is brought to order, structure and meaning.

We can say that “data analysis and data interpretation are a process representing the application of deductive and inductive logic to research.”

Why analyze data in research?

Researchers rely heavily on data, as they have a story to tell or research problems to solve. It starts with a question, and data is nothing but the answer to that question. But what if there is no question to ask? It is still possible to explore data without a problem; we call it ‘data mining’, which often reveals interesting patterns within the data that are worth exploring.

Regardless of the type of data researchers explore, their mission and their audience's vision guide them to find the patterns that shape the story they want to tell. One of the essential things expected from researchers while analyzing data is to stay open and remain unbiased toward unexpected patterns, expressions, and results. Remember, data analysis sometimes tells the most unforeseen yet exciting stories that were not anticipated when the analysis began. Therefore, rely on the data you have at hand and enjoy the journey of exploratory research.


Types of data in research

Every kind of data describes something once a specific value is assigned to it. For analysis, you need to organize, process, and present these values in a given context to make them useful. Data can come in different forms; here are the primary data types.

  • Qualitative data: When the data presented consists of words and descriptions, we call it qualitative data. Although you can observe this data, it is subjective and harder to analyze in research, especially for comparison. Example: anything describing taste, experience, texture, or an opinion is qualitative data. This type of data is usually collected through focus groups, personal qualitative interviews, qualitative observation, or open-ended questions in surveys.
  • Quantitative data: Any data expressed in numbers or numerical figures is called quantitative data. This type of data can be distinguished into categories, grouped, measured, calculated, or ranked. Example: responses to questions about age, rank, cost, length, weight, scores, and so on all come under this type of data. You can present such data in graphical formats or charts, or apply statistical analysis methods to it. Outcomes Measurement Systems (OMS) questionnaires in surveys are a significant source of numeric data.
  • Categorical data: This is data presented in groups. However, an item included in the categorical data cannot belong to more than one group. Example: a person responding to a survey about their living style, marital status, smoking habit, or drinking habit provides categorical data. A chi-square test is a standard method used to analyze this data.


Data analysis in qualitative research

Data analysis in qualitative research works a little differently from the analysis of numerical data, as qualitative data is made up of words, descriptions, images, objects, and sometimes symbols. Getting insight from such complex information is an involved process; hence, it is typically used for exploratory research and data analysis.

Finding patterns in the qualitative data

Although there are several ways to find patterns in textual information, a word-based method is the most relied-upon and widely used technique for research and data analysis. Notably, the data analysis process in qualitative research is largely manual. Here the researchers usually read the available data and identify repetitive or commonly used words.

For example, while studying data collected from African countries to understand the most pressing issues people face, researchers might find  “food”  and  “hunger” are the most commonly used words and will highlight them for further analysis.


The keyword context is another widely used word-based technique. In this method, the researcher tries to understand the concept by analyzing the context in which the participants use a particular keyword.  

For example , researchers conducting research and data analysis for studying the concept of ‘diabetes’ amongst respondents might analyze the context of when and how the respondent has used or referred to the word ‘diabetes.’

The scrutiny-based technique is another highly recommended text analysis method used to identify patterns in qualitative data. Compare and contrast is the most widely used method under this technique, used to identify how specific pieces of text are similar to or different from each other.

For example: to find out the “importance of a resident doctor in a company,” the collected data is divided into people who think it is necessary to hire a resident doctor and those who think it is unnecessary. Compare and contrast is the best method for analyzing polls with single-answer question types.

Metaphors can be used to reduce the data pile and find patterns in it so that it becomes easier to connect data with theory.

Variable partitioning is another technique used to split variables so that researchers can find more coherent descriptions and explanations in enormous amounts of data.


Methods used for data analysis in qualitative research

There are several techniques for analyzing data in qualitative research, but here are some commonly used methods:

  • Content Analysis: This is the most widely accepted and frequently employed technique for data analysis in research methodology. It can be used to analyze documented information from text, images, and sometimes physical items. When and where to use this method depends on the research questions.
  • Narrative Analysis: This method is used to analyze content gathered from various sources, such as personal interviews, field observation, and surveys. Most of the time, the stories or opinions shared by people are examined to find answers to the research questions.
  • Discourse Analysis: Similar to narrative analysis, discourse analysis is used to analyze interactions with people. However, this particular method considers the social context within which the communication between the researcher and the respondent takes place. In addition, discourse analysis also considers the respondents' lifestyle and day-to-day environment while deriving any conclusion.
  • Grounded Theory: When you want to explain why a particular phenomenon happened, grounded theory is the best resort for analyzing qualitative data. Grounded theory is applied to study data about a host of similar cases occurring in different settings. When researchers use this method, they may alter explanations or produce new ones until they arrive at a conclusion.


Data analysis in quantitative research

Preparing data for analysis

The first stage in research and data analysis is to prepare the data for analysis so that nominal data can be converted into something meaningful. Data preparation consists of the phases below.

Phase I: Data Validation

Data validation is done to understand whether the collected data sample meets the pre-set standards or is a biased sample. It is divided into four different stages:

  • Fraud: To ensure an actual human being records each response to the survey or the questionnaire
  • Screening: To make sure each participant or respondent is selected or chosen in compliance with the research criteria
  • Procedure: To ensure ethical standards were maintained while collecting the data sample
  • Completeness: To ensure that the respondent answered all the questions in an online survey, or that the interviewer asked all the questions devised in the questionnaire.

Phase II: Data Editing

More often than not, an extensive research data sample comes loaded with errors. Respondents sometimes fill in some fields incorrectly or skip them accidentally. Data editing is a process wherein researchers confirm that the provided data is free of such errors. They need to conduct the necessary consistency and outlier checks to edit the raw data and make it ready for analysis.

Phase III: Data Coding

Of all three phases, this is the most critical phase of data preparation, associated with grouping and assigning values to the survey responses. For example, if a survey is completed with a sample size of 1,000, the researcher might create age brackets to distinguish the respondents by age. It then becomes easier to analyze small data buckets rather than deal with the massive data pile.
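To make this concrete, here is a minimal sketch of the coding step in Python with pandas. The ages, bracket edges, and labels below are illustrative assumptions, not figures from the article:

```python
import pandas as pd

# Hypothetical survey responses; the ages are made-up illustrative values
responses = pd.DataFrame({
    "respondent_id": range(1, 9),
    "age": [19, 24, 31, 37, 42, 55, 61, 68],
})

# Code raw ages into brackets so small buckets can be analyzed
# instead of the massive pile of individual values
bins = [18, 25, 35, 45, 55, 65, 100]
labels = ["18-24", "25-34", "35-44", "45-54", "55-64", "65+"]
responses["age_bracket"] = pd.cut(responses["age"], bins=bins, labels=labels, right=False)

# Frequency of respondents per bracket
print(responses["age_bracket"].value_counts().sort_index())
```

Grouping on the coded bracket rather than the raw age keeps later comparisons and cross-tabulations manageable.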


Methods used for data analysis in quantitative research

After the data is prepared for analysis, researchers are open to using different research and data analysis methods to derive meaningful insights. Statistical analysis plans are the most favored for analyzing numerical data. In statistical analysis, distinguishing between categorical data and numerical data is essential, as categorical data involves distinct categories or labels, while numerical data consists of measurable quantities. Statistical methods are classified into two groups: ‘descriptive statistics’, used to describe data, and ‘inferential statistics’, which help in comparing the data.

Descriptive statistics

This method is used to describe the basic features of various types of data in research. It presents the data in such a meaningful way that patterns in the data start making sense. However, descriptive analysis does not go beyond summarizing the data at hand; any conclusions drawn are still based on the hypotheses researchers have formulated so far. Here are a few major types of descriptive analysis methods.

Measures of Frequency

  • Count, Percent, Frequency
  • It is used to denote how often a particular event occurs.
  • Researchers use it when they want to showcase how often a response is given.

Measures of Central Tendency

  • Mean, Median, Mode
  • The method is widely used to summarize a distribution by a single representative point.
  • Researchers use this method when they want to showcase the most commonly or averagely indicated response.

Measures of Dispersion or Variation

  • Range, Variance, Standard deviation
  • The range is the difference between the highest and lowest scores.
  • Variance and standard deviation measure how far observed scores fall from the mean.
  • These measures are used to identify the spread of scores by stating intervals.
  • Researchers use this method to show how spread out the data is and to what extent that spread affects the mean.

Measures of Position

  • Percentile ranks, Quartile ranks
  • It relies on standardized scores helping researchers to identify the relationship between different scores.
  • It is often used when researchers want to compare scores with the average count.
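As a compact sketch of how these four families of measures might be computed in practice (the scores below are hypothetical values, not data from any study):

```python
import numpy as np
import pandas as pd

# Hypothetical set of test scores used only to illustrate the measures above
scores = pd.Series([56, 61, 61, 67, 70, 72, 75, 78, 84, 91])

# Measures of frequency: how often each score occurs
print(scores.value_counts())

# Measures of central tendency
print(scores.mean(), scores.median(), scores.mode().tolist())

# Measures of dispersion or variation
print(scores.max() - scores.min())   # range
print(scores.var(), scores.std())    # variance and standard deviation

# Measures of position
print(np.percentile(scores, [25, 50, 75]))  # quartile cut-points
print(scores.rank(pct=True))                # percentile rank of each score
```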

For quantitative research, descriptive analysis often gives absolute numbers, but deeper analysis is needed to demonstrate the rationale behind those numbers. Nevertheless, it is necessary to think of the best method for research and data analysis suiting your survey questionnaire and the story researchers want to tell. For example, the mean is the best way to demonstrate students’ average scores in schools. It is better to rely on descriptive statistics when researchers intend to keep the research or outcome limited to the provided sample without generalizing it. For example, when you want to compare the average votes cast in two different cities, descriptive statistics are enough.

Descriptive analysis is also called a ‘univariate analysis’ since it is commonly used to analyze a single variable.

Inferential statistics

Inferential statistics are used to make predictions about a larger population after research and data analysis of a sample that represents that population. For example, you can ask some 100-odd audience members at a movie theater whether they like the movie they are watching. Researchers then use inferential statistics on the collected sample to reason that about 80-90% of people like the movie.
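As a hedged sketch of that movie-theater example, the reasoning from a sample to the wider audience can be made explicit with a confidence interval for a proportion. The counts below are hypothetical:

```python
import math
from scipy import stats

# Hypothetical sample: 100 moviegoers asked whether they like the film; 85 say yes
n, likes = 100, 85
p_hat = likes / n

# 95% confidence interval for the population proportion (normal approximation)
z = stats.norm.ppf(0.975)
margin = z * math.sqrt(p_hat * (1 - p_hat) / n)
print(f"Estimated share who like the movie: {p_hat:.0%} "
      f"(95% CI: {p_hat - margin:.0%} to {p_hat + margin:.0%})")
```

The interval, rather than the single sample percentage, is what licenses a statement about the larger audience.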

Here are two significant areas of inferential statistics.

  • Estimating parameters: It takes statistics from the sample research data and uses them to demonstrate something about the population parameter.
  • Hypothesis test: It is about sampling research data to answer the survey research questions. For example, researchers might be interested in understanding whether a recently launched new shade of lipstick is good or not, or whether multivitamin capsules help children perform better at games.

Beyond these, inferential statistics include sophisticated analysis methods used to showcase the relationship between different variables instead of describing a single variable. They are often used when researchers want something beyond absolute numbers to understand the relationship between variables.

Here are some of the commonly used methods for data analysis in research.

  • Correlation: When researchers are not conducting experimental or quasi-experimental research but are interested in understanding the relationship between two or more variables, they opt for correlational research methods.
  • Cross-tabulation: Also called contingency tables, cross-tabulation is used to analyze the relationship between multiple variables. Suppose the provided data has age and gender categories presented in rows and columns. A two-dimensional cross-tabulation helps with seamless data analysis and research by showing the number of males and females in each age category (a brief sketch follows this list).
  • Regression analysis: For understanding the strength of the relationship between two variables, researchers rely on the primary and commonly used regression analysis method, which is also a type of predictive analysis. In this method, you have an essential factor called the dependent variable, along with one or more independent variables, and you work out the impact of the independent variables on the dependent variable. The values of both independent and dependent variables are assumed to have been obtained in an error-free, random manner.
  • Frequency tables: This procedure records how often each response or value occurs, summarizing categorical data so that its distribution can be examined before further testing.
  • Analysis of variance: This statistical procedure is used for testing the degree to which two or more groups vary or differ in an experiment. A considerable degree of variation means the research findings were significant. In many contexts, ANOVA testing and variance analysis are similar.
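Here is the promised sketch of a two-dimensional cross-tabulation, with a chi-square test of independence added on top. The gender and age-group values are invented for illustration:

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical survey records with gender and age-group columns
df = pd.DataFrame({
    "gender": ["F", "M", "F", "M", "F", "M", "F", "M", "F", "M"],
    "age_group": ["18-34", "18-34", "35-54", "35-54", "55+",
                  "18-34", "35-54", "55+", "55+", "18-34"],
})

# Two-dimensional cross-tabulation: males and females per age category
table = pd.crosstab(df["age_group"], df["gender"])
print(table)

# Chi-square test of independence between the two categorical variables
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p_value:.3f}")
```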
Considerations in research data analysis

  • Researchers must have the necessary research skills to analyze and manipulate the data, and should be trained to demonstrate a high standard of research practice. Ideally, researchers should possess more than a basic understanding of the rationale for selecting one statistical method over another to obtain better data insights.
  • Usually, research and data analytics projects differ by scientific discipline; therefore, getting statistical advice at the beginning of the analysis helps in designing the survey questionnaire, selecting data collection methods, and choosing samples.


  • The primary aim of research data analysis is to derive ultimate insights that are unbiased. Any mistake in, or biased approach to, collecting data, selecting an analysis method, or choosing an audience sample will lead to a biased inference.
  • No amount of sophistication in research data analysis can rectify poorly defined objective outcome measurements. Whether the design is at fault or the intentions are unclear, the lack of clarity might mislead readers, so avoid this practice.
  • The motive behind data analysis in research is to present accurate and reliable data. As far as possible, avoid statistical errors, and find ways to deal with everyday challenges like outliers, missing data, data alteration, data mining, or developing graphical representations.

The sheer amount of data generated daily is staggering, especially now that data analysis has taken center stage. In 2018 alone, the total data supply amounted to 2.8 trillion gigabytes. Hence, it is clear that enterprises willing to survive in the hypercompetitive world must possess an excellent capability to analyze complex research data, derive actionable insights, and adapt to new market needs.


QuestionPro is an online survey platform that empowers organizations in data analysis and research and provides them a medium to collect data by creating appealing surveys.


PW Skills | Blog

Data Analysis Techniques in Research – Methods, Tools & Examples


Data analysis techniques in research are essential because they allow researchers to derive meaningful insights from data sets to support their hypotheses or research objectives.

Data Analysis Techniques in Research : While various groups, institutions, and professionals may have diverse approaches to data analysis, a universal definition captures its essence. Data analysis involves refining, transforming, and interpreting raw data to derive actionable insights that guide informed decision-making for businesses.


A straightforward illustration of data analysis emerges when we make everyday decisions, basing our choices on past experiences or predictions of potential outcomes.

If you want to learn more about this topic and acquire valuable skills that will set you apart in today’s data-driven world, we highly recommend enrolling in the Data Analytics Course by Physics Wallah . And as a special offer for our readers, use the coupon code “READER” to get a discount on this course.


What is Data Analysis?

Data analysis is the systematic process of inspecting, cleaning, transforming, and interpreting data with the objective of discovering valuable insights and drawing meaningful conclusions. This process involves several steps:

  • Inspecting : Initial examination of data to understand its structure, quality, and completeness.
  • Cleaning : Removing errors, inconsistencies, or irrelevant information to ensure accurate analysis.
  • Transforming : Converting data into a format suitable for analysis, such as normalization or aggregation.
  • Interpreting : Analyzing the transformed data to identify patterns, trends, and relationships.
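A minimal pandas sketch of these four steps might look like the following; the column names and values are assumptions made purely for illustration:

```python
import pandas as pd

# Hypothetical raw survey export
raw = pd.DataFrame({
    "age": [23, 31, None, 45, 23],
    "satisfaction": [4, 5, 3, None, 4],
    "region": ["north", "North", "south", "South", "north"],
})

# Inspecting: structure, quality, and completeness
raw.info()
print(raw.isna().sum())

# Cleaning: drop incomplete rows and normalize inconsistent labels
clean = raw.dropna().assign(region=lambda d: d["region"].str.lower())

# Transforming: aggregate into an analysis-ready summary
summary = clean.groupby("region")["satisfaction"].agg(["mean", "count"])

# Interpreting: read the summarized pattern
print(summary)
```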

Types of Data Analysis Techniques in Research

Data analysis techniques in research are categorized into qualitative and quantitative methods, each with its specific approaches and tools. These techniques are instrumental in extracting meaningful insights, patterns, and relationships from data to support informed decision-making, validate hypotheses, and derive actionable recommendations. Below is an in-depth exploration of the various types of data analysis techniques commonly employed in research:

1) Qualitative Analysis:

Definition: Qualitative analysis focuses on understanding non-numerical data, such as opinions, concepts, or experiences, to derive insights into human behavior, attitudes, and perceptions.

  • Content Analysis: Examines textual data, such as interview transcripts, articles, or open-ended survey responses, to identify themes, patterns, or trends.
  • Narrative Analysis: Analyzes personal stories or narratives to understand individuals’ experiences, emotions, or perspectives.
  • Ethnographic Studies: Involves observing and analyzing cultural practices, behaviors, and norms within specific communities or settings.

2) Quantitative Analysis:

Quantitative analysis emphasizes numerical data and employs statistical methods to explore relationships, patterns, and trends. It encompasses several approaches:

Descriptive Analysis:

  • Frequency Distribution: Represents the number of occurrences of distinct values within a dataset.
  • Central Tendency: Measures such as mean, median, and mode provide insights into the central values of a dataset.
  • Dispersion: Techniques like variance and standard deviation indicate the spread or variability of data.

Diagnostic Analysis:

  • Regression Analysis: Assesses the relationship between dependent and independent variables, enabling prediction or understanding causality.
  • ANOVA (Analysis of Variance): Examines differences between groups to identify significant variations or effects.

Predictive Analysis:

  • Time Series Forecasting: Uses historical data points to predict future trends or outcomes.
  • Machine Learning Algorithms: Techniques like decision trees, random forests, and neural networks predict outcomes based on patterns in data.

Prescriptive Analysis:

  • Optimization Models: Utilizes linear programming, integer programming, or other optimization techniques to identify the best solutions or strategies.
  • Simulation: Mimics real-world scenarios to evaluate various strategies or decisions and determine optimal outcomes.

Specific Techniques:

  • Monte Carlo Simulation: Models probabilistic outcomes to assess risk and uncertainty.
  • Factor Analysis: Reduces the dimensionality of data by identifying underlying factors or components.
  • Cohort Analysis: Studies specific groups or cohorts over time to understand trends, behaviors, or patterns within these groups.
  • Cluster Analysis: Classifies objects or individuals into homogeneous groups or clusters based on similarities or attributes.
  • Sentiment Analysis: Uses natural language processing and machine learning techniques to determine sentiment, emotions, or opinions from textual data.
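Of the specific techniques listed above, cluster analysis lends itself to a compact sketch. The engagement-hours and quiz-score values below are invented, and in a real study the features would normally be standardized first:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical features per respondent: [hours of engagement per week, average quiz score]
features = np.array([
    [1.0, 55], [1.5, 60], [2.0, 58],     # low engagement
    [5.0, 72], [5.5, 75], [6.0, 70],     # medium engagement
    [9.0, 88], [9.5, 91], [10.0, 86],    # high engagement
])

# Classify respondents into three homogeneous clusters based on similarity
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(features)

print(labels)                   # cluster assignment per respondent
print(kmeans.cluster_centers_)  # centroid of each cluster
```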


Data Analysis Techniques in Research Examples

To provide a clearer understanding of how data analysis techniques are applied in research, let’s consider a hypothetical research study focused on evaluating the impact of online learning platforms on students’ academic performance.

Research Objective:

Determine if students using online learning platforms achieve higher academic performance compared to those relying solely on traditional classroom instruction.

Data Collection:

  • Quantitative Data: Academic scores (grades) of students using online platforms and those using traditional classroom methods.
  • Qualitative Data: Feedback from students regarding their learning experiences, challenges faced, and preferences.

Data Analysis Techniques Applied:

1) Descriptive Analysis:

  • Calculate the mean, median, and mode of academic scores for both groups.
  • Create frequency distributions to represent the distribution of grades in each group.

2) Diagnostic Analysis:

  • Conduct an Analysis of Variance (ANOVA) to determine if there’s a statistically significant difference in academic scores between the two groups.
  • Perform Regression Analysis to assess the relationship between the time spent on online platforms and academic performance.
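For the hypothetical study above, a minimal sketch of these two diagnostic steps could look like this; the scores and study hours are invented for illustration:

```python
from scipy import stats

# Hypothetical exam scores for the two groups in the study
online = [78, 82, 88, 74, 91, 85, 80, 87]
traditional = [72, 75, 79, 70, 84, 77, 73, 80]

# One-way ANOVA: is the difference in mean scores statistically significant?
f_stat, p_value = stats.f_oneway(online, traditional)
print(f"ANOVA: F={f_stat:.2f}, p={p_value:.3f}")

# Simple regression: relationship between weekly platform hours and score
hours = [2, 3, 5, 1, 7, 6, 4, 6]
fit = stats.linregress(hours, online)
print(f"score = {fit.intercept:.1f} + {fit.slope:.1f} * hours (R^2 = {fit.rvalue**2:.2f})")
```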

3) Predictive Analysis:

  • Utilize Time Series Forecasting to predict future academic performance trends based on historical data.
  • Implement Machine Learning algorithms to develop a predictive model that identifies factors contributing to academic success on online platforms.

4) Prescriptive Analysis:

  • Apply Optimization Models to identify the optimal combination of online learning resources (e.g., video lectures, interactive quizzes) that maximize academic performance.
  • Use Simulation Techniques to evaluate different scenarios, such as varying student engagement levels with online resources, to determine the most effective strategies for improving learning outcomes.

5) Specific Techniques:

  • Conduct Factor Analysis on qualitative feedback to identify common themes or factors influencing students’ perceptions and experiences with online learning.
  • Perform Cluster Analysis to segment students based on their engagement levels, preferences, or academic outcomes, enabling targeted interventions or personalized learning strategies.
  • Apply Sentiment Analysis on textual feedback to categorize students’ sentiments as positive, negative, or neutral regarding online learning experiences.

By applying a combination of qualitative and quantitative data analysis techniques, this research example aims to provide comprehensive insights into the effectiveness of online learning platforms.


Data Analysis Techniques in Quantitative Research

Quantitative research involves collecting numerical data to examine relationships, test hypotheses, and make predictions. Various data analysis techniques are employed to interpret and draw conclusions from quantitative data. Here are some key data analysis techniques commonly used in quantitative research:

1) Descriptive Statistics:

  • Description: Descriptive statistics are used to summarize and describe the main aspects of a dataset, such as central tendency (mean, median, mode), variability (range, variance, standard deviation), and distribution (skewness, kurtosis).
  • Applications: Summarizing data, identifying patterns, and providing initial insights into the dataset.

2) Inferential Statistics:

  • Description: Inferential statistics involve making predictions or inferences about a population based on a sample of data. This technique includes hypothesis testing, confidence intervals, t-tests, chi-square tests, analysis of variance (ANOVA), regression analysis, and correlation analysis.
  • Applications: Testing hypotheses, making predictions, and generalizing findings from a sample to a larger population.

3) Regression Analysis:

  • Description: Regression analysis is a statistical technique used to model and examine the relationship between a dependent variable and one or more independent variables. Linear regression, multiple regression, logistic regression, and nonlinear regression are common types of regression analysis .
  • Applications: Predicting outcomes, identifying relationships between variables, and understanding the impact of independent variables on the dependent variable.
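A brief sketch of a simple linear regression with statsmodels is shown below; the spend and sales figures are hypothetical:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: advertising spend (independent) and sales (dependent)
spend = np.array([10, 15, 20, 25, 30, 35, 40])
sales = np.array([25, 31, 38, 44, 52, 57, 66])

X = sm.add_constant(spend)      # adds the intercept term
model = sm.OLS(sales, X).fit()  # ordinary least squares fit

print(model.params)     # intercept and slope
print(model.summary())  # full regression output, including R-squared and p-values
```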

4) Correlation Analysis:

  • Description: Correlation analysis is used to measure and assess the strength and direction of the relationship between two or more variables. The Pearson correlation coefficient, Spearman rank correlation coefficient, and Kendall’s tau are commonly used measures of correlation.
  • Applications: Identifying associations between variables and assessing the degree and nature of the relationship.

5) Factor Analysis:

  • Description: Factor analysis is a multivariate statistical technique used to identify and analyze underlying relationships or factors among a set of observed variables. It helps in reducing the dimensionality of data and identifying latent variables or constructs.
  • Applications: Identifying underlying factors or constructs, simplifying data structures, and understanding the underlying relationships among variables.

6) Time Series Analysis:

  • Description: Time series analysis involves analyzing data collected or recorded over a specific period at regular intervals to identify patterns, trends, and seasonality. Techniques such as moving averages, exponential smoothing, autoregressive integrated moving average (ARIMA), and Fourier analysis are used.
  • Applications: Forecasting future trends, analyzing seasonal patterns, and understanding time-dependent relationships in data.
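As a small sketch of two of the smoothing techniques mentioned (moving averages and exponential smoothing), using an invented monthly sales series:

```python
import pandas as pd

# Hypothetical monthly sales series
months = pd.date_range("2023-01-01", periods=12, freq="MS")
sales = pd.Series([100, 105, 98, 110, 120, 115, 130, 128, 140, 138, 150, 160],
                  index=months)

# Moving average: smooths short-term fluctuations to reveal the trend
print(sales.rolling(window=3).mean())

# Exponential smoothing: gives more weight to recent observations
print(sales.ewm(span=3, adjust=False).mean())
```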

7) ANOVA (Analysis of Variance):

  • Description: Analysis of variance (ANOVA) is a statistical technique used to analyze and compare the means of two or more groups or treatments to determine if they are statistically different from each other. One-way ANOVA, two-way ANOVA, and MANOVA (Multivariate Analysis of Variance) are common types of ANOVA.
  • Applications: Comparing group means, testing hypotheses, and determining the effects of categorical independent variables on a continuous dependent variable.

8) Chi-Square Tests:

  • Description: Chi-square tests are non-parametric statistical tests used to assess the association between categorical variables in a contingency table. The Chi-square test of independence, goodness-of-fit test, and test of homogeneity are common chi-square tests.
  • Applications: Testing relationships between categorical variables, assessing goodness-of-fit, and evaluating independence.

These quantitative data analysis techniques provide researchers with valuable tools and methods to analyze, interpret, and derive meaningful insights from numerical data. The selection of a specific technique often depends on the research objectives, the nature of the data, and the underlying assumptions of the statistical methods being used.


Data Analysis Methods

Data analysis methods refer to the techniques and procedures used to analyze, interpret, and draw conclusions from data. These methods are essential for transforming raw data into meaningful insights, facilitating decision-making processes, and driving strategies across various fields. Here are some common data analysis methods:

1) Descriptive Statistics:

  • Description: Descriptive statistics summarize and organize data to provide a clear and concise overview of the dataset. Measures such as mean, median, mode, range, variance, and standard deviation are commonly used.

2) Inferential Statistics:

  • Description: Inferential statistics involve making predictions or inferences about a population based on a sample of data. Techniques such as hypothesis testing, confidence intervals, and regression analysis are used.

3) Exploratory Data Analysis (EDA):

  • Description: EDA techniques involve visually exploring and analyzing data to discover patterns, relationships, anomalies, and insights. Methods such as scatter plots, histograms, box plots, and correlation matrices are utilized.
  • Applications: Identifying trends, patterns, outliers, and relationships within the dataset.
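A short EDA sketch along these lines, on an invented dataset, might combine a correlation matrix with a couple of plots:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical dataset with a few numeric columns
df = pd.DataFrame({
    "age": [22, 35, 47, 29, 53, 41, 38, 60],
    "income": [28, 52, 75, 40, 88, 63, 58, 95],  # in thousands
    "spend": [5, 9, 14, 7, 18, 12, 11, 20],
})

# Correlation matrix: a quick look at pairwise relationships
print(df.corr())

# Visual exploration: distributions and a scatter plot for two variables
df.hist(figsize=(8, 3))
df.plot.scatter(x="income", y="spend")
plt.show()
```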

4) Predictive Analytics:

  • Description: Predictive analytics use statistical algorithms and machine learning techniques to analyze historical data and make predictions about future events or outcomes. Techniques such as regression analysis, time series forecasting, and machine learning algorithms (e.g., decision trees, random forests, neural networks) are employed.
  • Applications: Forecasting future trends, predicting outcomes, and identifying potential risks or opportunities.

5) Prescriptive Analytics:

  • Description: Prescriptive analytics involve analyzing data to recommend actions or strategies that optimize specific objectives or outcomes. Optimization techniques, simulation models, and decision-making algorithms are utilized.
  • Applications: Recommending optimal strategies, decision-making support, and resource allocation.

6) Qualitative Data Analysis:

  • Description: Qualitative data analysis involves analyzing non-numerical data, such as text, images, videos, or audio, to identify themes, patterns, and insights. Methods such as content analysis, thematic analysis, and narrative analysis are used.
  • Applications: Understanding human behavior, attitudes, perceptions, and experiences.

7) Big Data Analytics:

  • Description: Big data analytics methods are designed to analyze large volumes of structured and unstructured data to extract valuable insights. Technologies such as Hadoop, Spark, and NoSQL databases are used to process and analyze big data.
  • Applications: Analyzing large datasets, identifying trends, patterns, and insights from big data sources.

8) Text Analytics:

  • Description: Text analytics methods involve analyzing textual data, such as customer reviews, social media posts, emails, and documents, to extract meaningful information and insights. Techniques such as sentiment analysis, text mining, and natural language processing (NLP) are used.
  • Applications: Analyzing customer feedback, monitoring brand reputation, and extracting insights from textual data sources.

These data analysis methods are instrumental in transforming data into actionable insights, informing decision-making processes, and driving organizational success across various sectors, including business, healthcare, finance, marketing, and research. The selection of a specific method often depends on the nature of the data, the research objectives, and the analytical requirements of the project or organization.


Data Analysis Tools

Data analysis tools are essential instruments that facilitate the process of examining, cleaning, transforming, and modeling data to uncover useful information, make informed decisions, and drive strategies. Here are some prominent data analysis tools widely used across various industries:

1) Microsoft Excel:

  • Description: A spreadsheet software that offers basic to advanced data analysis features, including pivot tables, data visualization tools, and statistical functions.
  • Applications: Data cleaning, basic statistical analysis, visualization, and reporting.

2) R Programming Language:

  • Description: An open-source programming language specifically designed for statistical computing and data visualization.
  • Applications: Advanced statistical analysis, data manipulation, visualization, and machine learning.

3) Python (with Libraries like Pandas, NumPy, Matplotlib, and Seaborn):

  • Description: A versatile programming language with libraries that support data manipulation, analysis, and visualization.
  • Applications: Data cleaning, statistical analysis, machine learning, and data visualization.

4) SPSS (Statistical Package for the Social Sciences):

  • Description: A comprehensive statistical software suite used for data analysis, data mining, and predictive analytics.
  • Applications: Descriptive statistics, hypothesis testing, regression analysis, and advanced analytics.

5) SAS (Statistical Analysis System):

  • Description: A software suite used for advanced analytics, multivariate analysis, and predictive modeling.
  • Applications: Data management, statistical analysis, predictive modeling, and business intelligence.

6) Tableau:

  • Description: A data visualization tool that allows users to create interactive and shareable dashboards and reports.
  • Applications: Data visualization , business intelligence , and interactive dashboard creation.

7) Power BI:

  • Description: A business analytics tool developed by Microsoft that provides interactive visualizations and business intelligence capabilities.
  • Applications: Data visualization, business intelligence, reporting, and dashboard creation.

8) SQL (Structured Query Language) Databases (e.g., MySQL, PostgreSQL, Microsoft SQL Server):

  • Description: Database management systems that support data storage, retrieval, and manipulation using SQL queries.
  • Applications: Data retrieval, data cleaning, data transformation, and database management.

9) Apache Spark:

  • Description: A fast and general-purpose distributed computing system designed for big data processing and analytics.
  • Applications: Big data processing, machine learning, data streaming, and real-time analytics.

10) IBM SPSS Modeler:

  • Description: A data mining software application used for building predictive models and conducting advanced analytics.
  • Applications: Predictive modeling, data mining, statistical analysis, and decision optimization.

These tools serve various purposes and cater to different data analysis needs, from basic statistical analysis and data visualization to advanced analytics, machine learning, and big data processing. The choice of a specific tool often depends on the nature of the data, the complexity of the analysis, and the specific requirements of the project or organization.


Importance of Data Analysis in Research

The importance of data analysis in research cannot be overstated; it serves as the backbone of any scientific investigation or study. Here are several key reasons why data analysis is crucial in the research process:

  • Data analysis helps ensure that the results obtained are valid and reliable. By systematically examining the data, researchers can identify any inconsistencies or anomalies that may affect the credibility of the findings.
  • Effective data analysis provides researchers with the necessary information to make informed decisions. By interpreting the collected data, researchers can draw conclusions, make predictions, or formulate recommendations based on evidence rather than intuition or guesswork.
  • Data analysis allows researchers to identify patterns, trends, and relationships within the data. This can lead to a deeper understanding of the research topic, enabling researchers to uncover insights that may not be immediately apparent.
  • In empirical research, data analysis plays a critical role in testing hypotheses. Researchers collect data to either support or refute their hypotheses, and data analysis provides the tools and techniques to evaluate these hypotheses rigorously.
  • Transparent and well-executed data analysis enhances the credibility of research findings. By clearly documenting the data analysis methods and procedures, researchers allow others to replicate the study, thereby contributing to the reproducibility of research findings.
  • In fields such as business or healthcare, data analysis helps organizations allocate resources more efficiently. By analyzing data on consumer behavior, market trends, or patient outcomes, organizations can make strategic decisions about resource allocation, budgeting, and planning.
  • In public policy and social sciences, data analysis is instrumental in developing and evaluating policies and interventions. By analyzing data on social, economic, or environmental factors, policymakers can assess the effectiveness of existing policies and inform the development of new ones.
  • Data analysis allows for continuous improvement in research methods and practices. By analyzing past research projects, identifying areas for improvement, and implementing changes based on data-driven insights, researchers can refine their approaches and enhance the quality of future research endeavors.

However, it is important to remember that mastering these techniques requires practice and continuous learning. That’s why we highly recommend the Data Analytics Course by Physics Wallah . Not only does it cover all the fundamentals of data analysis, but it also provides hands-on experience with various tools such as Excel, Python, and Tableau. Plus, if you use the “ READER ” coupon code at checkout, you can get a special discount on the course.


Data Analysis Techniques in Research FAQs

What are the 5 techniques for data analysis?

The five techniques for data analysis include: Descriptive Analysis, Diagnostic Analysis, Predictive Analysis, Prescriptive Analysis, and Qualitative Analysis.

What are techniques of data analysis in research?

Techniques of data analysis in research encompass both qualitative and quantitative methods. These techniques involve processes like summarizing raw data, investigating causes of events, forecasting future outcomes, offering recommendations based on predictions, and examining non-numerical data to understand concepts or experiences.

What are the 3 methods of data analysis?

The three primary methods of data analysis are: Qualitative Analysis, Quantitative Analysis, and Mixed-Methods Analysis.

What are the four types of data analysis techniques?

The four types of data analysis techniques are: Descriptive Analysis, Diagnostic Analysis, Predictive Analysis, and Prescriptive Analysis.


Research Method


Data Analysis – Process, Methods and Types


Data Analysis

Definition:

Data analysis refers to the process of inspecting, cleaning, transforming, and modeling data with the goal of discovering useful information, drawing conclusions, and supporting decision-making. It involves applying various statistical and computational techniques to interpret and derive insights from large datasets. The ultimate aim of data analysis is to convert raw data into actionable insights that can inform business decisions, scientific research, and other endeavors.

Data Analysis Process

The following are step-by-step guides to the data analysis process:

Define the Problem

The first step in data analysis is to clearly define the problem or question that needs to be answered. This involves identifying the purpose of the analysis, the data required, and the intended outcome.

Collect the Data

The next step is to collect the relevant data from various sources. This may involve collecting data from surveys, databases, or other sources. It is important to ensure that the data collected is accurate, complete, and relevant to the problem being analyzed.

Clean and Organize the Data

Once the data has been collected, it needs to be cleaned and organized. This involves removing any errors or inconsistencies in the data, filling in missing values, and ensuring that the data is in a format that can be easily analyzed.

Analyze the Data

The next step is to analyze the data using various statistical and analytical techniques. This may involve identifying patterns in the data, conducting statistical tests, or using machine learning algorithms to identify trends and insights.

Interpret the Results

After analyzing the data, the next step is to interpret the results. This involves drawing conclusions based on the analysis and identifying any significant findings or trends.

Communicate the Findings

Once the results have been interpreted, they need to be communicated to stakeholders. This may involve creating reports, visualizations, or presentations to effectively communicate the findings and recommendations.

Take Action

The final step in the data analysis process is to take action based on the findings. This may involve implementing new policies or procedures, making strategic decisions, or taking other actions based on the insights gained from the analysis.
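To tie the steps together, here is a minimal end-to-end sketch in Python. The file name, column names, and the framing question are assumptions made only for illustration:

```python
import pandas as pd

# 1. Define the problem: which region has the lowest customer satisfaction?
# 2. Collect the data: load a hypothetical survey export
df = pd.read_csv("survey_responses.csv")  # assumed columns: region, satisfaction

# 3. Clean and organize: drop incomplete rows and normalize labels
df = df.dropna(subset=["region", "satisfaction"])
df["region"] = df["region"].str.strip().str.lower()

# 4. Analyze: summarize satisfaction per region
summary = df.groupby("region")["satisfaction"].agg(["mean", "count"])

# 5. Interpret: identify the weakest region
weakest = summary["mean"].idxmin()

# 6. Communicate: print (or export / visualize) the findings
print(summary)
print(f"Lowest average satisfaction: {weakest}")

# 7. Take action: e.g., prioritize follow-up research in that region
```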

Types of Data Analysis

Types of Data Analysis are as follows:

Descriptive Analysis

This type of analysis involves summarizing and describing the main characteristics of a dataset, such as the mean, median, mode, standard deviation, and range.

Inferential Analysis

This type of analysis involves making inferences about a population based on a sample. Inferential analysis can help determine whether a certain relationship or pattern observed in a sample is likely to be present in the entire population.

Diagnostic Analysis

This type of analysis involves identifying and diagnosing problems or issues within a dataset. Diagnostic analysis can help identify outliers, errors, missing data, or other anomalies in the dataset.

Predictive Analysis

This type of analysis involves using statistical models and algorithms to predict future outcomes or trends based on historical data. Predictive analysis can help businesses and organizations make informed decisions about the future.

Prescriptive Analysis

This type of analysis involves recommending a course of action based on the results of previous analyses. Prescriptive analysis can help organizations make data-driven decisions about how to optimize their operations, products, or services.

Exploratory Analysis

This type of analysis involves exploring the relationships and patterns within a dataset to identify new insights and trends. Exploratory analysis is often used in the early stages of research or data analysis to generate hypotheses and identify areas for further investigation.

Data Analysis Methods

Data Analysis Methods are as follows:

Statistical Analysis

This method involves the use of mathematical models and statistical tools to analyze and interpret data. It includes measures of central tendency, correlation analysis, regression analysis, hypothesis testing, and more.

Machine Learning

This method involves the use of algorithms to identify patterns and relationships in data. It includes supervised and unsupervised learning, classification, clustering, and predictive modeling.
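As a hedged sketch of the supervised-learning side of this method, using scikit-learn on synthetic data rather than any real research dataset:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic dataset standing in for labeled research data
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# Supervised learning: fit on one portion of the data, evaluate on held-out data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

print("Held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```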

Data Mining

This method involves using statistical and machine learning techniques to extract information and insights from large and complex datasets.

Text Analysis

This method involves using natural language processing (NLP) techniques to analyze and interpret text data. It includes sentiment analysis, topic modeling, and entity recognition.

Network Analysis

This method involves analyzing the relationships and connections between entities in a network, such as social networks or computer networks. It includes social network analysis and graph theory.

Time Series Analysis

This method involves analyzing data collected over time to identify patterns and trends. It includes forecasting, decomposition, and smoothing techniques.

Spatial Analysis

This method involves analyzing geographic data to identify spatial patterns and relationships. It includes spatial statistics, spatial regression, and geospatial data visualization.

Data Visualization

This method involves using graphs, charts, and other visual representations to help communicate the findings of the analysis. It includes scatter plots, bar charts, heat maps, and interactive dashboards.

Qualitative Analysis

This method involves analyzing non-numeric data such as interviews, observations, and open-ended survey responses. It includes thematic analysis, content analysis, and grounded theory.

Multi-criteria Decision Analysis

This method involves analyzing multiple criteria and objectives to support decision-making. It includes techniques such as the analytical hierarchy process, TOPSIS, and ELECTRE.

Data Analysis Tools

There are various data analysis tools available that can help with different aspects of data analysis. Below is a list of some commonly used data analysis tools:

  • Microsoft Excel: A widely used spreadsheet program that allows for data organization, analysis, and visualization.
  • SQL : A programming language used to manage and manipulate relational databases.
  • R : An open-source programming language and software environment for statistical computing and graphics.
  • Python : A general-purpose programming language that is widely used in data analysis and machine learning.
  • Tableau : A data visualization software that allows for interactive and dynamic visualizations of data.
  • SAS : A statistical analysis software used for data management, analysis, and reporting.
  • SPSS : A statistical analysis software used for data analysis, reporting, and modeling.
  • Matlab : A numerical computing software that is widely used in scientific research and engineering.
  • RapidMiner : A data science platform that offers a wide range of data analysis and machine learning tools.

Applications of Data Analysis

Data analysis has numerous applications across various fields. Below are some examples of how data analysis is used in different fields:

  • Business : Data analysis is used to gain insights into customer behavior, market trends, and financial performance. This includes customer segmentation, sales forecasting, and market research.
  • Healthcare : Data analysis is used to identify patterns and trends in patient data, improve patient outcomes, and optimize healthcare operations. This includes clinical decision support, disease surveillance, and healthcare cost analysis.
  • Education : Data analysis is used to measure student performance, evaluate teaching effectiveness, and improve educational programs. This includes assessment analytics, learning analytics, and program evaluation.
  • Finance : Data analysis is used to monitor and evaluate financial performance, identify risks, and make investment decisions. This includes risk management, portfolio optimization, and fraud detection.
  • Government : Data analysis is used to inform policy-making, improve public services, and enhance public safety. This includes crime analysis, disaster response planning, and social welfare program evaluation.
  • Sports : Data analysis is used to gain insights into athlete performance, improve team strategy, and enhance fan engagement. This includes player evaluation, scouting analysis, and game strategy optimization.
  • Marketing : Data analysis is used to measure the effectiveness of marketing campaigns, understand customer behavior, and develop targeted marketing strategies. This includes customer segmentation, marketing attribution analysis, and social media analytics.
  • Environmental science : Data analysis is used to monitor and evaluate environmental conditions, assess the impact of human activities on the environment, and develop environmental policies. This includes climate modeling, ecological forecasting, and pollution monitoring.

When to Use Data Analysis

Data analysis is useful when you need to extract meaningful insights and information from large and complex datasets. It is a crucial step in the decision-making process, as it helps you understand the underlying patterns and relationships within the data, and identify potential areas for improvement or opportunities for growth.

Here are some specific scenarios where data analysis can be particularly helpful:

  • Problem-solving : When you encounter a problem or challenge, data analysis can help you identify the root cause and develop effective solutions.
  • Optimization : Data analysis can help you optimize processes, products, or services to increase efficiency, reduce costs, and improve overall performance.
  • Prediction: Data analysis can help you make predictions about future trends or outcomes, which can inform strategic planning and decision-making.
  • Performance evaluation : Data analysis can help you evaluate the performance of a process, product, or service to identify areas for improvement and potential opportunities for growth.
  • Risk assessment : Data analysis can help you assess and mitigate risks, whether it is financial, operational, or related to safety.
  • Market research : Data analysis can help you understand customer behavior and preferences, identify market trends, and develop effective marketing strategies.
  • Quality control: Data analysis can help you ensure product quality and customer satisfaction by identifying and addressing quality issues.

Purpose of Data Analysis

The primary purposes of data analysis can be summarized as follows:

  • To gain insights: Data analysis allows you to identify patterns and trends in data, which can provide valuable insights into the underlying factors that influence a particular phenomenon or process.
  • To inform decision-making: Data analysis can help you make informed decisions based on the information that is available. By analyzing data, you can identify potential risks, opportunities, and solutions to problems.
  • To improve performance: Data analysis can help you optimize processes, products, or services by identifying areas for improvement and potential opportunities for growth.
  • To measure progress: Data analysis can help you measure progress towards a specific goal or objective, allowing you to track performance over time and adjust your strategies accordingly.
  • To identify new opportunities: Data analysis can help you identify new opportunities for growth and innovation by identifying patterns and trends that may not have been visible before.

Examples of Data Analysis

Some Examples of Data Analysis are as follows:

  • Social Media Monitoring: Companies use data analysis to monitor social media activity in real-time to understand their brand reputation, identify potential customer issues, and track competitors. By analyzing social media data, businesses can make informed decisions on product development, marketing strategies, and customer service.
  • Financial Trading: Financial traders use data analysis to make real-time decisions about buying and selling stocks, bonds, and other financial instruments. By analyzing real-time market data, traders can identify trends and patterns that help them make informed investment decisions.
  • Traffic Monitoring : Cities use data analysis to monitor traffic patterns and make real-time decisions about traffic management. By analyzing data from traffic cameras, sensors, and other sources, cities can identify congestion hotspots and make changes to improve traffic flow.
  • Healthcare Monitoring: Healthcare providers use data analysis to monitor patient health in real-time. By analyzing data from wearable devices, electronic health records, and other sources, healthcare providers can identify potential health issues and provide timely interventions.
  • Online Advertising: Online advertisers use data analysis to make real-time decisions about advertising campaigns. By analyzing data on user behavior and ad performance, advertisers can make adjustments to their campaigns to improve their effectiveness.
  • Sports Analysis : Sports teams use data analysis to make real-time decisions about strategy and player performance. By analyzing data on player movement, ball position, and other variables, coaches can make informed decisions about substitutions, game strategy, and training regimens.
  • Energy Management : Energy companies use data analysis to monitor energy consumption in real-time. By analyzing data on energy usage patterns, companies can identify opportunities to reduce energy consumption and improve efficiency.

Characteristics of Data Analysis

Characteristics of Data Analysis are as follows:

  • Objective : Data analysis should be objective and based on empirical evidence, rather than subjective assumptions or opinions.
  • Systematic : Data analysis should follow a systematic approach, using established methods and procedures for collecting, cleaning, and analyzing data.
  • Accurate : Data analysis should produce accurate results, free from errors and bias. Data should be validated and verified to ensure its quality.
  • Relevant : Data analysis should be relevant to the research question or problem being addressed. It should focus on the data that is most useful for answering the research question or solving the problem.
  • Comprehensive : Data analysis should be comprehensive and consider all relevant factors that may affect the research question or problem.
  • Timely : Data analysis should be conducted in a timely manner, so that the results are available when they are needed.
  • Reproducible : Data analysis should be reproducible, meaning that other researchers should be able to replicate the analysis using the same data and methods.
  • Communicable : Data analysis should be communicated clearly and effectively to stakeholders and other interested parties. The results should be presented in a way that is understandable and useful for decision-making.

Advantages of Data Analysis

Advantages of Data Analysis are as follows:

  • Better decision-making: Data analysis helps in making informed decisions based on facts and evidence, rather than intuition or guesswork.
  • Improved efficiency: Data analysis can identify inefficiencies and bottlenecks in business processes, allowing organizations to optimize their operations and reduce costs.
  • Increased accuracy: Data analysis helps to reduce errors and bias, providing more accurate and reliable information.
  • Better customer service: Data analysis can help organizations understand their customers better, allowing them to provide better customer service and improve customer satisfaction.
  • Competitive advantage: Data analysis can provide organizations with insights into their competitors, allowing them to identify areas where they can gain a competitive advantage.
  • Identification of trends and patterns : Data analysis can identify trends and patterns in data that may not be immediately apparent, helping organizations to make predictions and plan for the future.
  • Improved risk management : Data analysis can help organizations identify potential risks and take proactive steps to mitigate them.
  • Innovation: Data analysis can inspire innovation and new ideas by revealing new opportunities or previously unknown correlations in data.

Limitations of Data Analysis

  • Data quality: The quality of data can impact the accuracy and reliability of analysis results. If data is incomplete, inconsistent, or outdated, the analysis may not provide meaningful insights.
  • Limited scope: Data analysis is limited by the scope of the data available. If data is incomplete or does not capture all relevant factors, the analysis may not provide a complete picture.
  • Human error : Data analysis is often conducted by humans, and errors can occur in data collection, cleaning, and analysis.
  • Cost : Data analysis can be expensive, requiring specialized tools, software, and expertise.
  • Time-consuming : Data analysis can be time-consuming, especially when working with large datasets or conducting complex analyses.
  • Overreliance on data: Data analysis should be complemented with human intuition and expertise. Overreliance on data can lead to a lack of creativity and innovation.
  • Privacy concerns: Data analysis can raise privacy concerns if personal or sensitive information is used without proper consent or security measures.

The Beginner's Guide to Statistical Analysis | 5 Steps & Examples

Statistical analysis means investigating trends, patterns, and relationships using quantitative data . It is an important research tool used by scientists, governments, businesses, and other organizations.

To draw valid conclusions, statistical analysis requires careful planning from the very start of the research process . You need to specify your hypotheses and make decisions about your research design, sample size, and sampling procedure.

After collecting data from your sample, you can organize and summarize the data using descriptive statistics . Then, you can use inferential statistics to formally test hypotheses and make estimates about the population. Finally, you can interpret and generalize your findings.

This article is a practical introduction to statistical analysis for students and researchers. We’ll walk you through the steps using two research examples. The first investigates a potential cause-and-effect relationship, while the second investigates a potential correlation between variables.

Table of contents

  • Step 1: Write your hypotheses and plan your research design
  • Step 2: Collect data from a sample
  • Step 3: Summarize your data with descriptive statistics
  • Step 4: Test hypotheses or make estimates with inferential statistics
  • Step 5: Interpret your results

Step 1: Write your hypotheses and plan your research design

To collect valid data for statistical analysis, you first need to specify your hypotheses and plan out your research design.

Writing statistical hypotheses

The goal of research is often to investigate a relationship between variables within a population . You start with a prediction, and use statistical analysis to test that prediction.

A statistical hypothesis is a formal way of writing a prediction about a population. Every research prediction is rephrased into null and alternative hypotheses that can be tested using sample data.

While the null hypothesis always predicts no effect or no relationship between variables, the alternative hypothesis states your research prediction of an effect or relationship.

  • Null hypothesis: A 5-minute meditation exercise will have no effect on math test scores in teenagers.
  • Alternative hypothesis: A 5-minute meditation exercise will improve math test scores in teenagers.
  • Null hypothesis: Parental income and GPA have no relationship with each other in college students.
  • Alternative hypothesis: Parental income and GPA are positively correlated in college students.
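
If it helps to see these predictions in formal notation, the two pairs of hypotheses above could be written roughly as follows. This is only a sketch: the symbols μ_diff (the population mean change in test scores) and ρ (the population correlation between parental income and GPA) are introduced here for illustration and are not used elsewhere in this guide.

```latex
% Meditation example (effect on math test scores)
\[ H_0\colon \mu_{\mathrm{diff}} = 0 \qquad H_a\colon \mu_{\mathrm{diff}} > 0 \]

% Parental income and GPA example (correlation)
\[ H_0\colon \rho = 0 \qquad H_a\colon \rho > 0 \]
```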

Planning your research design

A research design is your overall strategy for data collection and analysis. It determines the statistical tests you can use to test your hypothesis later on.

First, decide whether your research will use a descriptive, correlational, or experimental design. Experiments directly influence variables, whereas descriptive and correlational studies only measure variables.

  • In an experimental design , you can assess a cause-and-effect relationship (e.g., the effect of meditation on test scores) using statistical tests of comparison or regression.
  • In a correlational design , you can explore relationships between variables (e.g., parental income and GPA) without any assumption of causality using correlation coefficients and significance tests.
  • In a descriptive design , you can study the characteristics of a population or phenomenon (e.g., the prevalence of anxiety in U.S. college students) using statistical tests to draw inferences from sample data.

Your research design also concerns whether you’ll compare participants at the group level or individual level, or both.

  • In a between-subjects design , you compare the group-level outcomes of participants who have been exposed to different treatments (e.g., those who performed a meditation exercise vs those who didn’t).
  • In a within-subjects design , you compare repeated measures from participants who have participated in all treatments of a study (e.g., scores from before and after performing a meditation exercise).
  • In a mixed (factorial) design , one variable is altered between subjects and another is altered within subjects (e.g., pretest and posttest scores from participants who either did or didn’t do a meditation exercise).
Example: Experimental research design
First, you’ll take baseline test scores from participants. Then, your participants will undergo a 5-minute meditation exercise. Finally, you’ll record participants’ scores from a second math test.

In this experiment, the independent variable is the 5-minute meditation exercise, and the dependent variable is the math test score from before and after the intervention.

Example: Correlational research design
In a correlational study, you test whether there is a relationship between parental income and GPA in graduating college students. To collect your data, you will ask participants to fill in a survey and self-report their parents’ incomes and their own GPA.

Measuring variables

When planning a research design, you should operationalize your variables and decide exactly how you will measure them.

For statistical analysis, it’s important to consider the level of measurement of your variables, which tells you what kind of data they contain:

  • Categorical data represents groupings. These may be nominal (e.g., gender) or ordinal (e.g. level of language ability).
  • Quantitative data represents amounts. These may be on an interval scale (e.g. test score) or a ratio scale (e.g. age).

Many variables can be measured at different levels of precision. For example, age data can be quantitative (8 years old) or categorical (young). If a variable is coded numerically (e.g., level of agreement from 1–5), it doesn’t automatically mean that it’s quantitative instead of categorical.

Identifying the measurement level is important for choosing appropriate statistics and hypothesis tests. For example, you can calculate a mean score with quantitative data, but not with categorical data.

In a research study, along with measures of your variables of interest, you’ll often collect data on relevant participant characteristics.

Variable Type of data
Age Quantitative (ratio)
Gender Categorical (nominal)
Race or ethnicity Categorical (nominal)
Baseline test scores Quantitative (interval)
Final test scores Quantitative (interval)
Parental income Quantitative (ratio)
GPA Quantitative (interval)
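
As an illustration of why the level of measurement matters in practice, here is a minimal, hypothetical sketch in Python. The column names and values are invented for this example, and pandas is assumed to be available.

```python
# A sketch of recording each variable at the right level of measurement,
# so numeric summaries are only computed where they make sense.
import pandas as pd

df = pd.DataFrame({
    "age": [19, 22, 21, 20],                      # quantitative (ratio)
    "gender": ["F", "M", "F", "M"],               # categorical (nominal)
    "baseline_score": [61.0, 70.5, 66.0, 72.5],   # quantitative (interval)
})
df["gender"] = df["gender"].astype("category")

# Means are meaningful only for the quantitative columns ...
print(df[["age", "baseline_score"]].mean())
# ... while categorical variables are summarized with counts or proportions.
print(df["gender"].value_counts(normalize=True))
```

Storing a nominal variable as a categorical type makes it harder to accidentally compute meaningless numeric summaries on it.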


Step 2: Collect data from a sample

Population vs sample

In most cases, it’s too difficult or expensive to collect data from every member of the population you’re interested in studying. Instead, you’ll collect data from a sample.

Statistical analysis allows you to apply your findings beyond your own sample as long as you use appropriate sampling procedures . You should aim for a sample that is representative of the population.

Sampling for statistical analysis

There are two main approaches to selecting a sample.

  • Probability sampling: every member of the population has a chance of being selected for the study through random selection.
  • Non-probability sampling: some members of the population are more likely than others to be selected for the study because of criteria such as convenience or voluntary self-selection.

In theory, for highly generalizable findings, you should use a probability sampling method. Random selection reduces several types of research bias , like sampling bias , and ensures that data from your sample is actually typical of the population. Parametric tests can be used to make strong statistical inferences when data are collected using probability sampling.

But in practice, it’s rarely possible to gather the ideal sample. While non-probability samples are more at risk for biases like self-selection bias, they are much easier to recruit and collect data from. Non-parametric tests are more appropriate for non-probability samples, but they result in weaker inferences about the population.

If you want to use parametric tests for non-probability samples, you have to make the case that:

  • your sample is representative of the population you’re generalizing your findings to.
  • your sample lacks systematic bias.

Keep in mind that external validity means that you can only generalize your conclusions to others who share the characteristics of your sample. For instance, results from Western, Educated, Industrialized, Rich and Democratic samples (e.g., college students in the US) aren’t automatically applicable to all non-WEIRD populations.

If you apply parametric tests to data from non-probability samples, be sure to elaborate on the limitations of how far your results can be generalized in your discussion section .

Create an appropriate sampling procedure

Based on the resources available for your research, decide on how you’ll recruit participants.

  • Will you have resources to advertise your study widely, including outside of your university setting?
  • Will you have the means to recruit a diverse sample that represents a broad population?
  • Do you have time to contact and follow up with members of hard-to-reach groups?

Example: Sampling (experimental study)
Your participants are self-selected by their schools. Although you’re using a non-probability sample, you aim for a diverse and representative sample.

Example: Sampling (correlational study)
Your main population of interest is male college students in the US. Using social media advertising, you recruit senior-year male college students from a smaller subpopulation: seven universities in the Boston area.

Calculate sufficient sample size

Before recruiting participants, decide on your sample size either by looking at other studies in your field or using statistics. A sample that’s too small may be unrepresentative of the population, while a sample that’s too large will be more costly than necessary.

There are many sample size calculators online. Different formulas are used depending on whether you have subgroups or how rigorous your study should be (e.g., in clinical research). As a rule of thumb, a minimum of 30 units per subgroup is generally considered necessary.

To use these calculators, you have to understand and input these key components (a brief code sketch after the list shows how they fit together):

  • Significance level (alpha): the risk of rejecting a true null hypothesis that you are willing to take, usually set at 5%.
  • Statistical power : the probability of your study detecting an effect of a certain size if there is one, usually 80% or higher.
  • Expected effect size : a standardized indication of how large the expected result of your study will be, usually based on other similar studies.
  • Population standard deviation: an estimate of the population parameter based on a previous study or a pilot study of your own.
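
As a rough illustration of how these components fit together, the sketch below uses the power-analysis tools in the Python statsmodels package. The effect size, alpha, and power values are placeholder assumptions, not recommendations.

```python
# A sketch of an a priori sample-size calculation with statsmodels.
from statsmodels.stats.power import TTestPower

analysis = TTestPower()  # power analysis for a paired/one-sample t test
n = analysis.solve_power(
    effect_size=0.5,       # expected standardized effect (e.g., Cohen's d), assumed here
    alpha=0.05,            # significance level
    power=0.80,            # desired statistical power
    alternative="larger",  # one-sided test, as in the meditation example
)
print(f"Required sample size: {n:.0f} participants")
```

In practice you would take the expected effect size from prior studies or a pilot study rather than guessing it.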

Step 3: Summarize your data with descriptive statistics

Once you’ve collected all of your data, you can inspect them and calculate descriptive statistics that summarize them.

Inspect your data

There are various ways to inspect your data, including the following:

  • Organizing data from each variable in frequency distribution tables .
  • Displaying data from a key variable in a bar chart to view the distribution of responses.
  • Visualizing the relationship between two variables using a scatter plot .

By visualizing your data in tables and graphs, you can assess whether your data follow a skewed or normal distribution and whether there are any outliers or missing data.

A normal distribution means that your data are symmetrically distributed around a center where most values lie, with the values tapering off at the tail ends.

Mean, median, mode, and standard deviation in a normal distribution

In contrast, a skewed distribution is asymmetric and has more values on one end than the other. The shape of the distribution is important to keep in mind because only some descriptive statistics should be used with skewed distributions.

Extreme outliers can also produce misleading statistics, so you may need a systematic approach to dealing with these values.
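
A minimal sketch of these inspection steps in Python (with invented scores, and pandas/matplotlib assumed to be installed) might look like this:

```python
# Inspecting data: a frequency table, skewness check, histogram, and scatter plot.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=1)
pretest = rng.normal(68, 9, size=30).round(1)
posttest = (pretest + rng.normal(7, 5, size=30)).round(1)
scores = pd.DataFrame({"pretest": pretest, "posttest": posttest})

# Frequency distribution table for one variable
print(pd.cut(scores["pretest"], bins=5).value_counts().sort_index())
# Skewness near 0 suggests a roughly symmetric (normal-looking) distribution
print(scores.skew())

scores["pretest"].plot(kind="hist", bins=8, title="Pretest scores")   # distribution of responses
scores.plot(kind="scatter", x="pretest", y="posttest")                # relationship between two variables
plt.show()
```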

Calculate measures of central tendency

Measures of central tendency describe where most of the values in a data set lie. Three main measures of central tendency are often reported:

  • Mode : the most popular response or value in the data set.
  • Median : the value in the exact middle of the data set when ordered from low to high.
  • Mean : the sum of all values divided by the number of values.

However, depending on the shape of the distribution and level of measurement, only one or two of these measures may be appropriate. For example, many demographic characteristics can only be described using the mode or proportions, while a variable like reaction time may not have a mode at all.
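
For a quick illustration, the three measures can be computed directly with Python’s built-in statistics module (the values below are invented):

```python
# Mode, median, and mean on a small hypothetical data set.
import statistics

values = [4, 7, 7, 8, 9, 10, 12]
print("mode:", statistics.mode(values))      # most frequent value -> 7
print("median:", statistics.median(values))  # middle value when ordered -> 8
print("mean:", statistics.mean(values))      # sum of values / number of values
```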

Calculate measures of variability

Measures of variability tell you how spread out the values in a data set are. Four main measures of variability are often reported:

  • Range : the highest value minus the lowest value of the data set.
  • Interquartile range : the range of the middle half of the data set.
  • Standard deviation : the average distance between each value in your data set and the mean.
  • Variance : the square of the standard deviation.

Once again, the shape of the distribution and level of measurement should guide your choice of variability statistics. The interquartile range is the best measure for skewed distributions, while standard deviation and variance provide the best information for normal distributions.
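
A similar sketch for the four variability measures, using NumPy on the same kind of invented values:

```python
# Range, interquartile range, standard deviation, and variance.
import numpy as np

values = np.array([4, 7, 7, 8, 9, 10, 12])
data_range = values.max() - values.min()        # range
q1, q3 = np.percentile(values, [25, 75])
iqr = q3 - q1                                   # interquartile range
sd = values.std(ddof=1)                         # sample standard deviation
variance = values.var(ddof=1)                   # sample variance (= sd ** 2)
print(data_range, iqr, sd, variance)
```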

Using your table, you should check whether the units of the descriptive statistics are comparable for pretest and posttest scores. For example, are the variance levels similar across the groups? Are there any extreme values? If there are, you may need to identify and remove extreme outliers in your data set or transform your data before performing a statistical test.

  Pretest scores  Posttest scores
Mean  68.44  75.25
Standard deviation  9.43  9.88
Variance  88.96  97.96
Range  36.25  45.12
Sample size (n)  30

From this table, we can see that the mean score increased after the meditation exercise, and the variances of the two scores are comparable. Next, we can perform a statistical test to find out if this improvement in test scores is statistically significant in the population.

Example: Descriptive statistics (correlational study)
After collecting data from 653 students, you tabulate descriptive statistics for annual parental income and GPA.

It’s important to check whether you have a broad range of data points. If you don’t, your data may be skewed towards some groups more than others (e.g., high academic achievers), and only limited inferences can be made about a relationship.

  Parental income (USD)  GPA
Mean  62,100  3.12
Standard deviation  15,000  0.45
Variance  225,000,000  0.16
Range  8,000–378,000  2.64–4.00
Sample size (n)  653

Step 4: Test hypotheses or make estimates with inferential statistics

A number that describes a sample is called a statistic, while a number describing a population is called a parameter. Using inferential statistics, you can make conclusions about population parameters based on sample statistics.

Researchers often use two main methods (simultaneously) to make inferences in statistics.

  • Estimation: calculating population parameters based on sample statistics.
  • Hypothesis testing: a formal process for testing research predictions about the population using samples.

You can make two types of estimates of population parameters from sample statistics:

  • A point estimate : a value that represents your best guess of the exact parameter.
  • An interval estimate : a range of values that represent your best guess of where the parameter lies.

If your aim is to infer and report population characteristics from sample data, it’s best to use both point and interval estimates in your paper.

You can consider a sample statistic a point estimate for the population parameter when you have a representative sample (e.g., in a wide public opinion poll, the proportion of a sample that supports the current government is taken as the population proportion of government supporters).

There’s always error involved in estimation, so you should also provide a confidence interval as an interval estimate to show the variability around a point estimate.

A confidence interval uses the standard error and the z score from the standard normal distribution to convey where you’d generally expect to find the population parameter most of the time.
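
As a sketch, a point estimate and its 95% confidence interval could be computed like this in Python (the sample values are invented; scipy is assumed to be available):

```python
# Point estimate plus a 95% confidence interval from the standard error and z score.
import numpy as np
from scipy import stats

sample = np.array([72, 81, 69, 75, 77, 68, 74, 79, 71, 76])
point_estimate = sample.mean()                              # best single guess of the population mean
standard_error = sample.std(ddof=1) / np.sqrt(len(sample))
z = stats.norm.ppf(0.975)                                   # z score for a 95% interval
ci_low = point_estimate - z * standard_error
ci_high = point_estimate + z * standard_error
print(f"{point_estimate:.1f} (95% CI {ci_low:.1f} to {ci_high:.1f})")
```

With a sample this small you would normally use a t critical value rather than a z score; z is used here only to mirror the description above.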

Hypothesis testing

Using data from a sample, you can test hypotheses about relationships between variables in the population. Hypothesis testing starts with the assumption that the null hypothesis is true in the population, and you use statistical tests to assess whether the null hypothesis can be rejected or not.

Statistical tests determine where your sample data would lie on an expected distribution of sample data if the null hypothesis were true. These tests give two main outputs:

  • A test statistic tells you how much your data differs from the null hypothesis of the test.
  • A p value tells you the likelihood of obtaining your results if the null hypothesis is actually true in the population.

Statistical tests come in three main varieties:

  • Comparison tests assess group differences in outcomes.
  • Regression tests assess cause-and-effect relationships between variables.
  • Correlation tests assess relationships between variables without assuming causation.

Your choice of statistical test depends on your research questions, research design, sampling method, and data characteristics.

Parametric tests

Parametric tests make powerful inferences about the population based on sample data. But to use them, some assumptions must be met, and only some types of variables can be used. If your data violate these assumptions, you can perform appropriate data transformations or use alternative non-parametric tests instead.

A regression models the extent to which changes in a predictor variable result in changes in the outcome variable(s); a brief code sketch follows the list below.

  • A simple linear regression includes one predictor variable and one outcome variable.
  • A multiple linear regression includes two or more predictor variables and one outcome variable.
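
A brief sketch of both model types using the statsmodels formula interface. The variable names income, study_hours, and gpa, and the simulated values, are invented for this illustration.

```python
# Simple and multiple linear regression on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(seed=2)
df = pd.DataFrame({"income": rng.normal(62, 15, 200)})
df["study_hours"] = rng.normal(20, 5, 200)
df["gpa"] = 2.0 + 0.01 * df["income"] + 0.02 * df["study_hours"] + rng.normal(0, 0.3, 200)

simple = smf.ols("gpa ~ income", data=df).fit()                  # one predictor, one outcome
multiple = smf.ols("gpa ~ income + study_hours", data=df).fit()  # two predictors, one outcome
print(simple.params)
print(multiple.summary())
```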

Comparison tests usually compare the means of groups. These may be the means of different groups within a sample (e.g., a treatment and control group), the means of one sample group taken at different times (e.g., pretest and posttest scores), or a sample mean and a population mean.

  • A t test is for exactly 1 or 2 groups when the sample is small (30 or fewer).
  • A z test is for exactly 1 or 2 groups when the sample is large.
  • An ANOVA is for 3 or more groups.

The z and t tests have subtypes based on the number and types of samples and the hypotheses:

  • If you have only one sample that you want to compare to a population mean, use a one-sample test .
  • If you have paired measurements (within-subjects design), use a dependent (paired) samples test .
  • If you have completely separate measurements from two unmatched groups (between-subjects design), use an independent (unpaired) samples test .
  • If you expect a difference between groups in a specific direction, use a one-tailed test .
  • If you don’t have any expectations for the direction of a difference between groups, use a two-tailed test .

The only parametric correlation test is Pearson’s r . The correlation coefficient ( r ) tells you the strength of a linear relationship between two quantitative variables.

However, to test whether the correlation in the sample is strong enough to be important in the population, you also need to perform a significance test of the correlation coefficient, usually a t test, to obtain a p value. This test uses your sample size to calculate how much the correlation coefficient differs from zero in the population.

You use a dependent-samples, one-tailed t test to assess whether the meditation exercise significantly improved math test scores. The test gives you:

  • a t value (test statistic) of 3.00
  • a p value of 0.0028

Although Pearson’s r is a test statistic, it doesn’t tell you anything about how significant the correlation is in the population. You also need to test whether this sample correlation coefficient is large enough to demonstrate a correlation in the population.

A t test can also determine how significantly a correlation coefficient differs from zero based on sample size. Since you expect a positive correlation between parental income and GPA, you use a one-sample, one-tailed t test. The t test gives you:

  • a t value of 3.08
  • a p value of 0.001
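
Tests like the two described in these examples could be run with scipy, as in the sketch below. The data are simulated stand-ins, so the t, r, and p values it prints will not match the numbers reported above.

```python
# A paired one-tailed t test and a Pearson correlation with its significance test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=3)

# Dependent-samples (paired), one-tailed t test: posttest vs. pretest scores.
pretest = rng.normal(68, 9, 30)
posttest = pretest + rng.normal(5, 8, 30)
t_paired, p_paired = stats.ttest_rel(posttest, pretest, alternative="greater")

# Pearson's r with its significance test: parental income vs. GPA.
# (pearsonr returns a two-sided p by default; halve it for a one-tailed test
# when r is in the expected direction.)
income = rng.normal(62, 15, 100)
gpa = 2.5 + 0.01 * income + rng.normal(0, 0.4, 100)
r, p_corr = stats.pearsonr(income, gpa)

print(f"paired t = {t_paired:.2f}, one-tailed p = {p_paired:.4f}")
print(f"r = {r:.2f}, two-sided p = {p_corr:.4f}")
```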


Step 5: Interpret your results

The final step of statistical analysis is interpreting your results.

Statistical significance

In hypothesis testing, statistical significance is the main criterion for forming conclusions. You compare your p value to a set significance level (usually 0.05) to decide whether your results are statistically significant or non-significant.

Statistically significant results are considered unlikely to have arisen solely due to chance. There is only a very low chance of such a result occurring if the null hypothesis is true in the population.

Example: Interpret your results (experimental study)
You compare your p value of 0.0028 to your significance threshold of 0.05. Since the p value falls below this threshold, you can reject the null hypothesis. This means that you believe the meditation intervention, rather than random factors, directly caused the increase in test scores.

Example: Interpret your results (correlational study)
You compare your p value of 0.001 to your significance threshold of 0.05. With a p value under this threshold, you can reject the null hypothesis. This indicates a statistically significant correlation between parental income and GPA in male college students.

Note that correlation doesn’t always mean causation, because there are often many underlying factors contributing to a complex variable like GPA. Even if one variable is related to another, this may be because of a third variable influencing both of them, or indirect links between the two variables.

Effect size

A statistically significant result doesn’t necessarily mean that there are important real life applications or clinical outcomes for a finding.

In contrast, the effect size indicates the practical significance of your results. It’s important to report effect sizes along with your inferential statistics for a complete picture of your results. You should also report interval estimates of effect sizes if you’re writing an APA style paper .

Example: Effect size (experimental study)
With a Cohen’s d of 0.72, there’s medium to high practical significance to your finding that the meditation exercise improved test scores.

Example: Effect size (correlational study)
To determine the effect size of the correlation coefficient, you compare your Pearson’s r value to Cohen’s effect size criteria.
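
One common way to compute Cohen’s d is the difference in means divided by the pooled standard deviation, as in this hypothetical sketch (the score arrays are invented and will not reproduce the 0.72 reported above):

```python
# Cohen's d using the pooled-standard-deviation formulation.
import numpy as np

def cohens_d(group1, group2):
    n1, n2 = len(group1), len(group2)
    pooled_var = ((n1 - 1) * np.var(group1, ddof=1) +
                  (n2 - 1) * np.var(group2, ddof=1)) / (n1 + n2 - 2)
    return (np.mean(group2) - np.mean(group1)) / np.sqrt(pooled_var)

pretest = np.array([61, 70, 66, 72, 68, 75, 63, 69, 71, 67], dtype=float)
posttest = np.array([68, 74, 73, 80, 75, 81, 70, 77, 78, 72], dtype=float)
print(f"Cohen's d = {cohens_d(pretest, posttest):.2f}")
```

For a within-subjects design, d is also sometimes computed from the standard deviation of the difference scores, so it is worth stating which variant you report.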

Decision errors

Type I and Type II errors are mistakes made in research conclusions. A Type I error means rejecting the null hypothesis when it’s actually true, while a Type II error means failing to reject the null hypothesis when it’s false.

You can aim to minimize the risk of these errors by selecting an optimal significance level and ensuring high power . However, there’s a trade-off between the two errors, so a fine balance is necessary.

Frequentist versus Bayesian statistics

Traditionally, frequentist statistics emphasizes null hypothesis significance testing and always starts with the assumption of a true null hypothesis.

However, Bayesian statistics has grown in popularity as an alternative approach in the last few decades. In this approach, you use previous research to continually update your hypotheses based on your expectations and observations.

Bayes factor compares the relative strength of evidence for the null versus the alternative hypothesis rather than making a conclusion about rejecting the null hypothesis or not.



Effective Experiment Design and Data Analysis in Transportation Research (2012)

Chapter 3: Examples of Effective Experiment Design and Data Analysis in Transportation Research


About this Chapter

This chapter provides a wide variety of examples of research questions. The examples demonstrate varying levels of detail with regard to experiment designs and the statistical analyses required. The number and types of examples were selected after consulting with many practitioners. The attempt was made to provide a couple of detailed examples in each of several areas of transportation practice. For each type of problem or analysis, some comments also appear about research topics in other areas that might be addressed using the same approach. Questions that were briefly introduced in Chapter 2 are addressed in considerably more depth in the context of these examples.

All the examples are organized and presented using the outline below. Where applicable, references to the two-volume primer produced under NCHRP Project 20-45 have been provided to encourage the reader to obtain more detail about calculation techniques and more technical discussion of issues.

Basic Outline for Examples

The numbered outline below is the model for the structure of all of the examples that follow.

1. Research Question/Problem Statement: A simple statement of the research question is given. For example, in the maintenance category, does crack sealant A perform better than crack sealant B?
2. Identification and Description of Variables: The dependent and independent variables are identified and described. The latter includes an indication of whether, for example, the variables are discrete or continuous.
3. Data Collection: A hypothetical scenario is presented to describe how, where, and when data should be collected. As appropriate, reference is made to conventions or requirements for some types of data (e.g., if delay times at an intersection are being calculated before and after some treatment, the data collected need to be consistent with the requirements in the Highway Capacity Manual). Typical problems are addressed, such as sample size, the need for control groups, and so forth.
4. Specification of Analysis Technique and Data Analysis: The links between successfully framing the research question, fully describing the variables that need to be considered, and the specification of the appropriate analysis technique are highlighted in each example. References to NCHRP Project 20-45 are provided for additional detail. The appropriate types of statistical test(s) are described for the specific example.
5. Interpreting the Results: In each example, results that can be expected from the analysis are discussed in terms of what they mean from a statistical perspective (e.g., the t-test result from a comparison of means indicates whether the mean values of two distributions can be considered to be equal with a specified degree of confidence) as well as an operational perspective (e.g., judging whether the difference is large enough to make an operational difference). In each example, the typical results and their limitations are discussed.
6. Conclusion and Discussion: This section recaps how the early steps in the process lead directly to the later ones. Comments are made regarding how changes in the early steps can affect not only the results of the analysis but also the appropriateness of the approach.
7. Applications in Other Areas of Transportation Research: Each example includes a short list of typical applications in other areas of transportation research for which the approach or analysis technique would be appropriate.

Techniques Covered in the Examples

The determination of what kinds of statistical techniques to include in the examples was made after consulting with a variety of professionals and examining responses to a survey of research-oriented practitioners. The examples are not exhaustive insofar as not every type of statistical analysis is covered. However, the attempt has been made to cover a representative sample of techniques that the practitioner is most likely to encounter in undertaking or supervising research-oriented projects. The following techniques are introduced in one or more examples:

• Descriptive statistics
• Fitting distributions/goodness of fit (used in one example)
• Simple one- and two-sample comparison of means
• Simple comparisons of multiple means using analysis of variance (ANOVA)
• Factorial designs (also ANOVA)
• Simple comparisons of means before and after some treatment
• Complex before-and-after comparisons involving control groups
• Trend analysis
• Regression
• Logit analysis (used in one example)
• Survey design and analysis
• Simulation
• Non-parametric methods (used in one example)

Although the attempt has been made to make the examples as readable as possible, some technical terms may be unfamiliar to some readers. Detailed definitions for most applicable statistical terms are available in the glossary in NCHRP Project 20-45, Volume 2, Appendix A. Most definitions used here are consistent with those contained in NCHRP Project 20-45, which contains useful information for everyone from the beginning researcher to the most accomplished statistician.

Some variations appear in the notations used in the examples. For example, in statistical analysis an alternate hypothesis may be represented by Ha or by H1, and readers will find both notations used in this report. The examples were developed by several authors with differing backgrounds, and latitude was deliberately given to the authors to use the notations with which they are most familiar. The variations have been included purposefully to acquaint readers with the fact that the same concepts (e.g., something as simple as a mean value) may be noted in various ways by different authors or analysts.

Finally, the more widely used techniques, such as analysis of variance (ANOVA), are applied in more than one example. Readers interested in ANOVA are encouraged to read all the ANOVA examples, as each example presents different aspects of or perspectives on the approach, and computational techniques presented in one example may not be repeated in later examples (although a citation typically is provided).

Areas Covered in the Examples

Transportation research is very broad, encompassing many fields. Based on consultation with many research-oriented professionals and a survey of practitioners, key areas of research were identified. Although these areas have lots of overlap, explicit examples in the following areas are included:

• Construction
• Environment
• Lab testing and instrumentation
• Maintenance
• Materials
• Pavements
• Public transportation
• Structures/bridges
• Traffic operations
• Traffic safety
• Transportation planning
• Work zones

The 21 examples provided on the following pages begin with the most straightforward analytical approaches (i.e., descriptive statistics) and progress to more sophisticated approaches. Table 1 lists the examples along with the area of research and method of analysis for each example.

Table 1. Examples provided in this report.

Example  Area  Method of Analysis
1  Structures/bridges  Descriptive statistics (exploring and presenting data to describe existing conditions)
2  Public transport  Descriptive statistics (organizing and presenting data to describe a system or component)
3  Environment  Descriptive statistics (organizing and presenting data to explain current conditions)
4  Traffic operations  Goodness of fit (chi-square test; determining if observed/collected data fit a certain distribution)
5  Construction  Simple comparisons to specified values (t-test to compare the mean value of a small sample to a standard or other requirement)
6  Maintenance  Simple two-sample comparison (t-test for paired comparisons; comparing the mean values of two sets of matched data)
7  Materials  Simple two-sample comparisons (t-test for paired comparisons and the F-test for comparing variances)
8  Laboratory testing and/or instrumentation  Simple ANOVA (comparing the mean values of more than two samples using the F-test)
9  Materials  Simple ANOVA (comparing more than two mean values and the F-test for equality of means)
10  Pavements  Simple ANOVA (comparing the mean values of more than two samples using the F-test)
11  Pavements  Factorial design (an ANOVA approach exploring the effects of varying more than one independent variable)
12  Work zones  Simple before-and-after comparisons (exploring the effect of some treatment before it is applied versus after it is applied)
13  Traffic safety  Complex before-and-after comparisons using control groups (examining the effect of some treatment or application with consideration of other factors)
14  Work zones  Trend analysis (examining, describing, and modeling how something changes over time)
15  Structures/bridges  Trend analysis (examining a trend over time)
16  Transportation planning  Multiple regression analysis (developing and testing proposed linear models with more than one independent variable)
17  Traffic operations  Regression analysis (developing a model to predict the values that a dependent variable can take as a function of one or more independent variables)
18  Transportation planning  Logit and related analysis (developing predictive models when the dependent variable is dichotomous)
19  Public transit  Survey design and analysis (organizing survey data for statistical analysis)
20  Traffic operations  Simulation (using field data to simulate or model operations or outcomes)
21  Traffic safety  Non-parametric methods (methods to be used when data do not follow assumed or conventional distributions)

Example 1: Structures/Bridges; Descriptive Statistics

Area: Structures/bridges
Method of Analysis: Descriptive statistics (exploring and presenting data to describe existing conditions and develop a basis for further analysis)

1. Research Question/Problem Statement: An engineer for a state agency wants to determine the functional and structural condition of a select number of highway bridges located across the state. Data are obtained for 100 bridges scheduled for routine inspection. The data will be used to develop bridge rehabilitation and/or replacement programs. The objective of this analysis is to provide an overview of the bridge conditions, and to present various methods to display the data in a concise and meaningful manner.

Question/Issue: Use collected data to describe existing conditions and prepare for future analysis. In this case, bridge inspection data from the state are to be studied and summarized.

2. Identification and Description of Variables: Bridge inspection generally entails collection of numerous variables that include location information, traffic data, structural elements’ type and condition, and functional characteristics. In this example, the variables are: bridge condition ratings of the deck, superstructure, and substructure; and overall condition of the bridge. Based on the severity of deterioration and the extent of spread through a bridge component, a condition rating is assigned on a discrete scale from 0 (failed) to 9 (excellent). These ratings (in addition to several other factors) are used in categorization of a bridge in one of three overall conditions: not deficient; structurally deficient; or functionally obsolete.

3. Data Collection: Data are collected at 100 scheduled locations by bridge inspectors. It is important to note that the bridge condition rating scale is based on subjective categories, and there may be inherent variability among inspectors in their assignment of ratings to bridge components. A sample of data is compiled to document the bridge condition rating of the three primary structural components and the overall condition by location and ownership (Table 2). Notice that the overall condition of a bridge is not necessarily based only on the condition rating of its components (e.g., they cannot just be added).

Table 2. Sample bridge inspection data.

Bridge No.  Owner  Location  Deck  Superstructure  Substructure  Overall Condition
1  State  Rural  8  8  8  ND*
7  Local agency  Rural  6  6  6  FO*
39  State  Urban  6  6  2  SD*
69  State park  Rural  7  5  5  SD
92  City  Urban  5  6  6  ND
*ND = not deficient; FO = functionally obsolete; SD = structurally deficient.

4. Specification of Analysis Technique and Data Analysis: The two primary variables of interest are bridge condition rating and overall condition. The overall condition of the bridge is a categorical variable with three possible values: not deficient; structurally deficient; and functionally obsolete. The frequencies of these values in the given data set are calculated and displayed in the pie chart below. A pie chart provides a visualization of the relative proportions of bridges falling into each category that is often easier to communicate to the reader than a table showing the same information (Figure 1).

Figure 1. Highway bridge conditions: structurally deficient (SD) 13%; functionally obsolete (FO) 10%; neither SD/FO 77%.

Another way to look at the overall bridge condition variable is by cross-tabulation of the three condition categories with the two location categories (urban and rural), as shown in Table 3. A cross-tabulation provides the joint distribution of two (or more) variables such that each cell represents the frequency of occurrence of a specific combination of possible values. For example, as seen in Table 3, there are 10 structurally deficient bridges in rural areas, which represent 11.4% of all rural area bridges inspected. The numbers in parentheses are column percentages and add up to 100%. Table 3 also shows that 88 of the bridges inspected were located in rural areas, whereas 12 were located in urban areas.

Table 3. Cross-tabulation of bridge condition by location.

  Rural  Urban  Total
Structurally deficient  10 (11.4%)  3 (25.0%)  13
Functionally obsolete  6 (6.8%)  4 (33.3%)  10
Not deficient  72 (81.8%)  5 (41.7%)  77
Total  88 (100%)  12 (100%)  100

The mean values of the bridge condition rating variable for deck, superstructure, and substructure are shown in Table 4. These have been calculated by taking the sum of all the values and then dividing by the total number of cases (100 in this example). Generally, a condition rating of 4 or below indicates deficiency in a structural component. For the purpose of comparison, the mean bridge condition rating of the 13 structurally deficient bridges also is provided.

Table 4. Bridge condition ratings.

Rating Category  Mean Value
Overall average bridge condition rating (deck)  6.20
Overall average bridge condition rating (superstructure)  6.47
Overall average bridge condition rating (substructure)  6.08
Average bridge condition rating of structurally deficient bridges (deck)  4.92
Average bridge condition rating of structurally deficient bridges (superstructure)  5.30
Average bridge condition rating of structurally deficient bridges (substructure)  4.54

Notice that while the rating scale for the bridge conditions is discrete with values ranging from 0 (failure) to 9 (excellent), the average bridge condition variable is continuous. Therefore, an average score of 6.47 would indicate the overall condition of all bridges to be between 6 (satisfactory) and 7 (good). The combined bridge condition rating of deck, superstructure, and substructure is not defined; therefore calculating the mean of the three components’ average ratings would make no sense. Also, the average bridge condition rating of functionally obsolete bridges is not calculated because other functional characteristics also accounted for this designation. The distributions of the bridge condition ratings for deck, superstructure, and substructure are shown in Figure 2. Based on the cut-off point of 4, approximately 7% of all bridge decks, 2% of all superstructures, and 5% of all substructures are deficient.

Figure 2. Bridge condition ratings (percentage of structures at each condition rating, 0–9, for deck, superstructure, and substructure).

5. Interpreting the Results: The results indicate that a majority of bridges (77%) are not structurally or functionally deficient. The inspections were carried out on bridges primarily located in rural areas (88 out of 100). The bridge condition variable may also be cross-tabulated with the ownership variable to determine distribution by jurisdiction. The average condition ratings for the three bridge components for all bridges lie between 6 (satisfactory, some minor problems) and 7 (good, no problems noted).

6. Conclusion and Discussion: This example illustrates how to summarize and present quantitative and qualitative data on bridge conditions. It is important to understand the measurement scale of variables in order to interpret the results correctly. Bridge inspection data collected over time may also be analyzed to determine trends in the condition of bridges in a given area. Trend analysis is addressed in Example 15 (structures).

7. Applications in Other Areas of Transportation Research: Descriptive statistics could be used to present data in other areas of transportation research, such as:

• Transportation Planning—to assess the distribution of travel times between origin-destination pairs in an urban area. Overall averages could also be calculated.
• Traffic Operations—to analyze the average delay per vehicle at a railroad crossing.
• Traffic Operations/Safety—to examine the frequency of turning violations at driveways with various turning restrictions.
• Work Zones, Environment—to assess the average energy consumption during various stages of construction.

Example 2: Public Transport; Descriptive Statistics

Area: Public transport
Method of Analysis: Descriptive statistics (organizing and presenting data to describe a system or component)

1. Research Question/Problem Statement: The manager of a transit agency would like to present information to the board of commissioners on changes in revenue that resulted from a change in the fare. The transit system provides three basic types of service: local bus routes, express bus routes, and demand-responsive bus service. There are 15 local bus routes, 10 express routes, and 1 demand-responsive system.

Question/Issue: Use data to describe some change over time. In this instance, data from 2008 and 2009 are used to describe the change in revenue on each route/part of a transit system when the fare structure was changed from variable (per mile) to fixed fares.

2. Identification and Description of Variables: Revenue data are available for each route on the local and express bus system and the demand-responsive system as a whole for the years 2008 and 2009.

3. Data Collection: Revenue data were collected on each route for both 2008 and 2009. The annual revenue for the demand-responsive system was also collected. These data are shown in Table 5.

4. Specification of Analysis Technique and Data Analysis: The objective of this analysis is to present the impact of changing the fare system in a series of graphs. The presentation is intended to show the impact on each component of the transit system as well as the impact on overall system revenue. The impact of the fare change on the overall revenue is best shown with a bar graph (Figure 3). The variation in the impact across system components can be illustrated in a similar graph (Figure 4). A pie chart also can be used to illustrate the relative impact on each system component (Figure 5).
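
To illustrate how graphs like Figures 3 through 5 could be produced, here is a sketch in Python with matplotlib, using the component revenue totals that appear later in this example (in millions of dollars). It is an illustration of the charting approach only, not the authors' actual figures.

```python
# Bar graph of revenue by component and year, plus a pie chart of the 2009 shares.
import matplotlib.pyplot as plt

components = ["Local buses", "Express buses", "Demand responsive"]
revenue_2008 = [4.57, 0.94, 0.51]   # million $
revenue_2009 = [4.71, 0.94, 0.52]

fig, (bar_ax, pie_ax) = plt.subplots(1, 2, figsize=(10, 4))

x = range(len(components))
bar_ax.bar([i - 0.2 for i in x], revenue_2008, width=0.4, label="2008")
bar_ax.bar([i + 0.2 for i in x], revenue_2009, width=0.4, label="2009")
bar_ax.set_xticks(list(x))
bar_ax.set_xticklabels(components)
bar_ax.set_ylabel("Revenue (million $)")
bar_ax.legend()

pie_ax.pie(revenue_2009, labels=components, autopct="%1.1f%%")
pie_ax.set_title("Share of 2009 revenue")

plt.tight_layout()
plt.show()
```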

Table 5. Revenue by route or type of service and year.

Bus Route  2008 Revenue  2009 Revenue
Local Route 1  $350,500  $365,700
Local Route 2  $263,000  $271,500
Local Route 3  $450,800  $460,700
Local Route 4  $294,300  $306,400
Local Route 5  $173,900  $184,600
Local Route 6  $367,800  $375,100
Local Route 7  $415,800  $430,300
Local Route 8  $145,600  $149,100
Local Route 9  $248,200  $260,800
Local Route 10  $310,400  $318,300
Local Route 11  $444,300  $459,200
Local Route 12  $208,400  $205,600
Local Route 13  $407,600  $412,400
Local Route 14  $161,500  $169,300
Local Route 15  $325,100  $340,200
Express Route 1  $85,400  $83,600
Express Route 2  $110,300  $109,200
Express Route 3  $65,800  $66,200
Express Route 4  $125,300  $127,600
Express Route 5  $90,800  $90,400
Express Route 6  $125,800  $123,400
Express Route 7  $87,200  $86,900
Express Route 8  $68,300  $67,200
Express Route 9  $110,100  $112,300
Express Route 10  $73,200  $72,100
Demand-Responsive System  $510,100  $521,300

Figure 3. Impact of fare change on overall revenue (total system revenue: $6.02 million in 2008, $6.17 million in 2009).

18 effective experiment Design and Data analysis in transportation research Express Buses, 15.7% Express Buses, 15.2% Local Buses, 76.3% Local Buses, 75.8% Demand Responsive, 8.5% Demand Responsive, 8.5% 2008 2009 Figure 5. Pie charts illustrating percent of revenue from each component of a transit system. If it is important to display the variability in the impact within the various bus routes in the local bus or express bus operations, this also can be illustrated (Figure 6). This type of diagram shows the maximum value, minimum value, and mean value of the percent increase in revenue across the 15 local bus routes and the 10 express bus routes. 5. Interpreting the results: These results indicate that changing from a variable fare based on trip length (2008) to a fixed fare (2009) on both the local bus routes and the express bus routes had little effect on revenue. On the local bus routes, there was an average increase in revenue of 3.1%. On the express bus routes, there was an average decrease in revenue of 0.4%. These changes altered the percentage of the total system revenue attributed to the local bus routes and the express bus routes. The local bus routes generated 76.3% of the revenue in 2009, compared to 75.8% in 2008. The percentage of revenue generated by the express bus routes dropped from 15.7% to 15.2%, and the demand-responsive system generated 8.5% in both 2008 and 2009. 6. Conclusion and Discussion: The total revenue increased from $6.02 million to $6.17 mil lion. The cost of operating a variable fare system is greater than that of operating a fixed fare system— hence, net income probably increased even more (more revenue, lower cost for fare collection), and the decision to modify the fare system seems reasonable. Notice that the entire discussion Figure 4. Variation in impact of fare change across system components. 0.94 0.51 0.94 0.52 4.57 4.71 0 0.5 1 1.5 2 2.5 3 3.5 4 4.5 5 5.5 6 Local Buses Express Buses Demand Responsive Re ve nu e (M illi on $ ) 2008 2009

also is based on the assumption that no other factors changed between 2008 and 2009 that might have affected total revenues. One of the implicit assumptions is that the number of riders remained relatively constant from 1 year to the next. If the ridership had changed, the statistics reported would have to be changed. Using the measure revenue/rider, for example, would help control (or normalize) for the variation in ridership.

7. Applications in Other Areas in Transportation Research: Descriptive statistics are widely used and can convey a great deal of information to a reader. They also can be used to present data in many areas of transportation research, including:
• Transportation Planning—to display public response frequency or percentage to various alternative designs.
• Traffic Operations—to display the frequency or percentage of crashes by route type or by the type of traffic control devices present at an intersection.
• Airport Engineering—to display the arrival pattern of passengers or flights by hour or other time period.
• Public Transit—to display the average load factor on buses by time of day.

Example 3: Environment; Descriptive Statistics

Area: Environment
Method of Analysis: Descriptive statistics (organizing and presenting data to explain current conditions)

1. Research Question/Problem Statement: The planning and programming director in Environmental City wants to determine the current ozone concentration in the city. These data will be compared to data collected after the projects included in the Transportation Improvement Program (TIP) have been completed to determine the effects of these projects on the environment. Because the terrain, the presence of hills or tall buildings, the prevailing wind direction, and the sample station location relative to high volume roads or industrial sites all affect the ozone level, multiple samples are required to determine the ozone concentration level in a city. For this example, air samples are obtained each weekday in the month of July (21 days) at 14 air-sampling stations in the city: 7 in the central city and 7 in the outlying areas of the city. The objective of the analysis is to determine the ozone concentration in the central city, the outlying areas of the city, and the city as a whole.

Question/Issue
Use collected data to describe existing conditions and prepare for future analysis. In this example, air pollution levels in the central city, the outlying areas, and the overall city are to be described.

2. Identification and Description of Variables: The variable to be analyzed is the 8-hour average ozone concentration in parts per million (ppm) at each of the 14 air-sampling stations. The 8-hour average concentration is the basis for the EPA standard, and July is selected because ozone levels are temperature sensitive and increase with a rise in the temperature.

3. Data Collection: Ozone concentrations in ppm are recorded for each hour of the day at each of the 14 air-sampling stations. The highest average concentration for any 8-hour period during the day is recorded and tabulated. This results in 294 concentration observations (14 stations for 21 days). Table 6 and Table 7 show the data for the seven central city locations and the seven outlying area locations.

Table 6. Central city 8-hour ozone concentration samples (ppm).

Day    St. 1   St. 2   St. 3   St. 4   St. 5   St. 6   St. 7   ∑
1      0.079   0.084   0.081   0.083   0.088   0.086   0.089   0.590
2      0.082   0.087   0.088   0.086   0.086   0.087   0.081   0.597
3      0.080   0.081   0.077   0.072   0.084   0.083   0.081   0.558
4      0.083   0.086   0.082   0.079   0.086   0.087   0.089   0.592
5      0.082   0.087   0.080   0.075   0.090   0.089   0.085   0.588
6      0.075   0.084   0.079   0.076   0.080   0.083   0.081   0.558
7      0.078   0.079   0.080   0.074   0.078   0.080   0.075   0.544
8      0.081   0.077   0.082   0.081   0.076   0.079   0.074   0.540
9      0.088   0.084   0.083   0.085   0.083   0.083   0.088   0.594
10     0.085   0.087   0.086   0.089   0.088   0.087   0.090   0.612
11     0.079   0.082   0.082   0.089   0.091   0.089   0.090   0.602
12     0.078   0.080   0.081   0.086   0.088   0.089   0.089   0.591
13     0.081   0.079   0.077   0.083   0.084   0.085   0.087   0.576
14     0.083   0.080   0.079   0.081   0.080   0.082   0.083   0.568
15     0.084   0.083   0.080   0.085   0.082   0.086   0.085   0.585
16     0.086   0.087   0.085   0.087   0.089   0.090   0.089   0.613
17     0.082   0.085   0.083   0.090   0.087   0.088   0.089   0.604
18     0.080   0.081   0.080   0.087   0.085   0.086   0.088   0.587
19     0.080   0.083   0.077   0.083   0.085   0.084   0.087   0.579
20     0.081   0.084   0.079   0.082   0.081   0.083   0.088   0.578
21     0.082   0.084   0.080   0.081   0.082   0.083   0.085   0.577
∑      1.709   1.744   1.701   1.734   1.773   1.789   1.793   12.243

4. Specification of Analysis Technique and Data Analysis: Much of the data used in analyzing transportation issues has year-to-year, month-to-month, day-to-day, and even hour-to-hour variations. For this reason, making only one observation, or even a few observations, may not accurately describe the phenomenon being observed. Thus, standard practice is to obtain several observations and report the mean value of all observations. In this example, the phenomenon being observed is the daily ozone concentration at a series of air-sampling locations. The statistic to be estimated is the mean value of this variable over

the test period selected. The mean value of any data set (x̄) equals the sum of all observations in the set divided by the total number of observations in the set (n):

x̄ = (Σ x_i) / n

The variables of interest stated in the research question are the average ozone concentration for the central city, the outlying areas, and the total city. Thus, there are three data sets: the first table, the second table, and the sum of the two tables. The first data set has a sample size of 147; the second data set also has a sample size of 147, and the third data set contains 294 observations.

Using the formula just shown, the mean value of the ozone concentration in the central city is calculated as follows:

x̄ = 12.243 / 147 = 0.083 ppm

The mean value of the ozone concentration in the outlying areas of the city is:

x̄ = 10.553 / 147 = 0.072 ppm

The mean value of the ozone concentration for the entire city is:

x̄ = 22.796 / 294 = 0.078 ppm

Table 7. Outlying area 8-hour ozone concentration samples (ppm).

Day    St. 8   St. 9   St. 10  St. 11  St. 12  St. 13  St. 14  ∑
1      0.072   0.074   0.073   0.071   0.079   0.070   0.074   0.513
2      0.074   0.075   0.077   0.075   0.081   0.075   0.077   0.534
3      0.070   0.072   0.074   0.074   0.083   0.078   0.080   0.531
4      0.067   0.070   0.071   0.077   0.080   0.077   0.081   0.523
5      0.064   0.067   0.068   0.072   0.079   0.078   0.079   0.507
6      0.069   0.068   0.066   0.070   0.075   0.079   0.082   0.509
7      0.071   0.069   0.070   0.071   0.074   0.071   0.077   0.503
8      0.073   0.072   0.074   0.072   0.076   0.073   0.078   0.518
9      0.072   0.075   0.077   0.074   0.078   0.074   0.080   0.530
10     0.074   0.077   0.079   0.077   0.080   0.076   0.079   0.542
11     0.070   0.072   0.075   0.074   0.079   0.074   0.078   0.522
12     0.068   0.067   0.068   0.070   0.074   0.070   0.075   0.492
13     0.065   0.063   0.067   0.068   0.072   0.067   0.071   0.473
14     0.063   0.062   0.067   0.069   0.073   0.068   0.073   0.475
15     0.064   0.064   0.066   0.067   0.070   0.066   0.070   0.467
16     0.061   0.059   0.062   0.062   0.067   0.064   0.069   0.434
17     0.065   0.061   0.060   0.064   0.069   0.066   0.073   0.458
18     0.067   0.063   0.065   0.068   0.073   0.069   0.076   0.499
19     0.069   0.067   0.068   0.072   0.077   0.071   0.078   0.502
20     0.071   0.069   0.070   0.074   0.080   0.074   0.077   0.515
21     0.070   0.065   0.072   0.076   0.079   0.073   0.079   0.514
∑      1.439   1.431   1.409   1.497   1.598   1.513   1.606   10.553
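These means (and the station and day means discussed next) can be reproduced in a spreadsheet or with a short script. The sketch below is illustrative only: it assumes the Table 6 and Table 7 readings have been saved in two comma-separated files (the file names are hypothetical), with one row per day and one column per station.

    import numpy as np

    # Hypothetical file names; each file holds 21 rows (days) by 7 columns (stations)
    # of 8-hour average ozone concentrations in ppm, as in Tables 6 and 7.
    central = np.loadtxt("central_city.csv", delimiter=",")
    outlying = np.loadtxt("outlying_area.csv", delimiter=",")
    city = np.concatenate([central, outlying], axis=1)    # columns 0-6 = stations 1-7, 7-13 = stations 8-14

    print("Central city mean:  %.3f ppm" % central.mean())    # 12.243 / 147
    print("Outlying area mean: %.3f ppm" % outlying.mean())   # 10.553 / 147
    print("Entire city mean:   %.3f ppm" % city.mean())       # 22.796 / 294

    # Mean for one station (column) and for one day (row)
    print("Station 1 mean: %.3f ppm" % central[:, 0].mean())  # 1.709 / 21
    print("Day 1, central city mean: %.3f ppm" % central[0, :].mean())

    # Highest single observation and where it occurs
    day, station = np.unravel_index(city.argmax(), city.shape)
    print("Maximum %.3f ppm on Day %d at Station %d" % (city.max(), day + 1, station + 1))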

Using the same equation, the mean value for each air-sampling location can be found by summing the value of the ozone concentration in the column representing that location and dividing by the 21 observations at that location. For example, considering Sample Station 1, the mean value of the ozone concentration is 1.709/21 = 0.081 ppm. Similarly, the mean value of the ozone concentrations for any specific day can be found by summing the ozone concentration values in the row representing that day and dividing by the number of stations. For example, for Day 1, the mean value of the ozone concentration in the central city is 0.590/7 = 0.084 ppm. In the outlying areas of the city, it is 0.513/7 = 0.073 ppm, and for the entire city it is 1.103/14 = 0.079 ppm.

The highest and lowest values of the ozone concentration can be obtained by searching the two tables. The highest ozone concentration (0.091 ppm) is logged as having occurred at Station 5 on Day 11. The lowest ozone concentration (0.059 ppm) occurred at Station 9 on Day 16.

The variation by sample location can be illustrated in the form of a frequency diagram. A graph can be used to show the variation in the average ozone concentration for the seven sample stations in the central city (Figure 7). Notice that all of these calculations (and more) can be done very easily if all the data are put in a spreadsheet and various statistical functions used. Graphs and other displays also can be made within the spreadsheet.

[Figure 7. Average ozone concentration for seven central city sampling stations (ppm): 0.081, 0.083, 0.081, 0.083, 0.084, 0.085, and 0.085 for Stations 1 through 7.]

5. Interpreting the Results: In this example, the data are not tested to determine whether they fit a known distribution or whether one average value is significantly higher or lower than another. It can only be reported that, as recorded in July, the mean ozone concentration in the central city was greater than the concentration in the outlying areas of the city. (For testing whether the data fit a known distribution, see Example 4 on fitting distributions and goodness of fit. For comparing mean values, see Examples 5 through 7.)

It is known that ozone concentration varies by day and by location of the air-sampling equipment. If there is some threshold value of importance, such as the ozone concentration level considered acceptable by the EPA, these data could be used to determine the number of days that this level was exceeded, or the number of stations that recorded an ozone concentration above this threshold. This is done by comparing each day or each station with the threshold

examples of effective experiment Design and Data analysis in transportation research 23 value. It must be noted that, as presented, this example is not a statistical comparison per se (i.e., there has been no significance testing or formal statistical comparison). 6. Conclusion and Discussion: This example illustrates how to determine and present quanti- tative information about a data set containing values of a varying parameter. If a similar set of data were captured each month, the variation in ozone concentration could be analyzed to describe the variation over the year. Similarly, if data were captured at these same locations in July of every year, the trend in ozone concentration over time could be determined. 7. Applications in Other Areas in Transportation: These descriptive statistics techniques can be used to present data in other areas of transportation research, such as: • Traffic Operations/Safety and Transportation Planning – to analyze the average speed of vehicles on streets with a speed limit of 45 miles per hour (mph) in residential, commercial, and industrial areas by sampling a number of streets in each of these area types. – to examine the average emergency vehicle response time to various areas of the city or county, by analyzing dispatch and arrival times for emergency calls to each area of interest. • Pavement Engineering—to analyze the average number of potholes per mile on pavement as a function of the age of pavement, by sampling a number of streets where the pavement age falls in discrete categories (0 to 5 years, 5 to 10 years, 10 to 15 years, and greater than 15 years). • Traffic Safety—to evaluate the average number of crashes per month at intersections with two-way STOP control versus four-way STOP control by sampling a number of intersections in each category over time. Example 4: Traffic Operations; Goodness of Fit Area: Traffic operations Method of Analysis: Goodness of fit (chi-square test; determining if observed distributions of data fit hypothesized standard distributions) 1. Research Question/Problem Statement: A research team is developing a model to estimate travel times of various types of personal travel (modes) on a path shared by bicyclists, in-line skaters, and others. One version of the model relies on the assertion that the distribution of speeds for each mode conforms to the normal distribution. (For a helpful definition of this and other statistical terms, see the glossary in NCHRP Project 20-45, Volume 2, Appendix A.) Based on a literature review, the researchers are sure that bicycle speeds are normally distributed. However, the shapes of the speed distributions for other users are unknown. Thus, the objective is to determine if skater speeds are normally distributed in this instance. Question/Issue Do collected data fit a specific type of probability distribution? In this example, do the speeds of in-line skaters on a shared-use path follow a normal distribution (are they normally distributed)? 2. Identification and Description of Variables: The only variable collected is the speed of in-line skaters passing through short sections of the shared-use path. 3. Data Collection: The team collects speeds using a video camera placed where most path users would not notice it. The speed of each free-flowing skater (i.e., each skater who is not closely following another path user) is calculated from the times that the skater passes two benchmarks on the path visible in the camera frame. 
Several days of data collection allow a large sample of 219 skaters to be measured. (An implicit assumption is made that there is no

variation in the data by day.) The data have a familiar bell shape; that is, when graphed, they look like they are normally distributed (Figure 8). Each bar in the figure shows the number of observations per 1.00-mph-wide speed bin. There are 10 observations between 6.00 mph and 6.99 mph.

[Figure 8. Distribution of observed in-line skater speeds: number of observations per 1-mph speed bin for speeds from about 1 mph to 23 mph.]

4. Specification of Analysis Technique and Data Analysis: This analysis involves several preliminary steps followed by two major steps. In the preliminaries, the team calculates the mean and standard deviation from the data sample as 10.17 mph and 2.79 mph, respectively, using standard formulas described in NCHRP Project 20-45, Volume 2, Chapter 6, Section C under the heading "Frequency Distributions, Variance, Standard Deviation, Histograms, and Boxplots." Then the team forms bins of observations of sufficient size to conduct the analysis. For this analysis, the team forms bins containing at least four observations each, which means forming a bin for speeds of 5 mph and lower and a bin for speeds of 17 mph or higher. There is some argument regarding the minimum allowable cell size. Some analysts argue that the minimum is five; others argue that the cell size can be smaller. Smaller numbers of observations in a bin may distort the results. When in doubt, the analysis can be done with different assumptions regarding the cell size. The left two columns in Table 8 show the data ready for analysis.

The first major step of the analysis is to generate the theoretical normal distribution to compare to the field data. To do this, the team calculates a value of Z, the standard normal variable, for each bin i using the following equation:

Z = (x_i - µ) / σ

where x_i is the speed in miles per hour (mph) corresponding to the bin, µ is the mean speed, and σ is the standard deviation of all of the observations in the speed sample in mph. For example (and with reference to the data in Table 8), for a speed of 5 mph the value of Z will be (5 - 10.17)/2.79 = -1.85 and for a speed of 6 mph, the value of Z will be (6 - 10.17)/2.79 = -1.50. The team then consults a table of standard normal values (i.e., NCHRP Project 20-45, Volume 2, Appendix C, Table C-1) to convert these Z values into A values representing the area under the standard normal distribution curve. The A value for a Z of -1.85 is 0.468, while the A value for a Z of -1.50 is 0.432. The difference between these two A values, representing the area under the standard normal probability curve corresponding to the speed of 6 mph, is 0.036 (calculated 0.468 - 0.432 = 0.036). The team multiplies 0.036 by the total sample size (219) to estimate that there should be 7.78 skaters with a speed of 6 mph if the speeds follow the standard normal distribution.
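The expected bin counts and the chi-square statistic developed in the next step can also be generated directly with statistical software instead of a printed normal table. The sketch below is only an illustration of that calculation using scipy, with the observed counts taken from Table 8 and Z evaluated at 5, 6, ..., 16 mph as in the text. Note that counting all 13 bins of Table 8 gives 10 degrees of freedom (critical value of about 18.3) rather than the 9 used in the text; the decision to reject is the same either way.

    import numpy as np
    from scipy import stats

    mean, sd, n = 10.17, 2.79, 219

    # Observed counts per bin (Table 8): <= 5.99, 6.00-6.99, ..., 16.00-16.99, >= 17.00
    observed = np.array([6, 10, 18, 24, 37, 38, 24, 21, 15, 13, 4, 4, 5])

    # Speeds at which the text evaluates Z; -inf and +inf fold the tails into the end bins
    edges = np.concatenate(([-np.inf], np.arange(5, 17), [np.inf]))

    # Expected counts under a normal distribution with the sample mean and standard deviation
    probabilities = np.diff(stats.norm.cdf(edges, loc=mean, scale=sd))
    expected = probabilities * n

    chi_square = ((observed - expected) ** 2 / expected).sum()      # roughly 19.5
    df = len(observed) - 1 - 2        # bins minus 1, minus the 2 estimated parameters
    critical = stats.chi2.ppf(0.95, df)
    print(f"chi-square = {chi_square:.2f}, critical value (df = {df}) = {critical:.2f}")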

The team follows a similar procedure for all speeds. Notice that the areas under the curve also can be calculated in a simple Excel spreadsheet using the "NORMDIST" function for a given x value and the average speed of 10.17 and standard deviation of 2.79. The values shown in Table 8 have been estimated using the Excel function.

The second major step of the analysis is to use the chi-square test (as described in NCHRP Project 20-45, Volume 2, Chapter 6, Section F) to determine if the theoretical normal distribution is significantly different from the actual data distribution. The team computes a chi-square value for each bin i using the formula:

χ²_i = (O_i - E_i)² / E_i

where O_i is the number of actual observations in bin i and E_i is the expected number of observations in bin i estimated by using the theoretical distribution. For the bin of 6 mph speeds, O = 10 (from the table), E = 7.78 (calculated), and the χ² contribution for that cell is 0.637. The sum of the χ² values for all bins is 19.519.

The degrees of freedom (df) used for this application of the chi-square test are the number of bins minus 1 minus the number of variables in the distribution of interest. Given that the normal distribution has two variables (see May, Traffic Flow Fundamentals, 1990, p. 40), in this example the degrees of freedom equal 9 (calculated 12 - 1 - 2 = 9). From a standard table of chi-square values (NCHRP Project 20-45, Volume 2, Appendix C, Table C-2), the team finds that the critical value at the 95% confidence level for this case (with df = 9) is 16.9. The calculated value of the statistic is ~19.5, more than the tabular value. The results of all of these observations and calculations are shown in Table 8.

Table 8. Observations, theoretical predictions, and chi-square values for each bin.

Speed (mph)       Number of Observations   Number Predicted by Normal Distribution   Chi-Square Value
Under 5.99        6                        6.98                                      0.137
6.00 to 6.99      10                       7.78                                      0.637
7.00 to 7.99      18                       13.21                                     1.734
8.00 to 8.99      24                       19.78                                     0.902
9.00 to 9.99      37                       26.07                                     4.585
10.00 to 10.99    38                       30.26                                     1.980
11.00 to 11.99    24                       30.93                                     1.554
12.00 to 12.99    21                       27.85                                     1.685
13.00 to 13.99    15                       22.08                                     2.271
14.00 to 14.99    13                       15.42                                     0.379
15.00 to 15.99    4                        9.48                                      3.169
16.00 to 16.99    4                        5.13                                      0.251
17.00 and over    5                        4.03                                      0.234
Total             219                      219                                       19.519

5. Interpreting the Results: The calculated chi-square value of ~19.5 is greater than the critical chi-square value of 16.9. The team concludes, therefore, that the normal distribution is significantly different from the distribution of the speed sample at the 95% level (i.e., that the in-line skater speed data do not appear to be normally distributed). Larger variations between the observed and expected distributions lead to higher values of the statistic and would be interpreted as it being less likely that the data are distributed according to the

26 effective experiment Design and Data analysis in transportation research hypothesized distribution. Conversely, smaller variations between observed and expected distributions result in lower values of the statistic, which would suggest that it is more likely that the data are normally distributed because the observed values would fit better with the expected values. 6. Conclusion and Discussion: In this case, the results suggest that the normal distribution is not a good fit to free-flow speeds of in-line skaters on shared-use paths. Interestingly, if the 23 mph observation is considered to be an outlier and discarded, the results of the analysis yield a different conclusion (that the data are normally distributed). Some researchers use a simple rule that an outlier exists if the observation is more than three standard deviations from the mean value. (In this example, the 23 mph observation is, indeed, more than three standard deviations from the mean.) If there is concern with discarding the observation as an outlier, it would be easy enough in this example to repeat the data collection exercise. Looking at the data plotted above, it is reasonably apparent that the well-known normal distribution should be a good fit (at least without the value of 23). However, the results from the statistical test could not confirm the suspicion. In other cases, the type of distribution may not be so obvious, the distributions in question may be obscure, or some distribution parameters may need to be calibrated for a good fit. In these cases, the statistical test is much more valuable. The chi-square test also can be used simply to compare two observed distributions to see if they are the same, independent of any underlying probability distribution. For example, if it is desired to know if the distribution of traffic volume by vehicle type (e.g., automobiles, light trucks, and so on) is the same at two different freeway locations, the two distributions can be compared to see if they are similar. The consequences of an error in the procedure outlined here can be severe. This is because the distributions chosen as a result of the procedure often become the heart of predictive models used by many other engineers and planners. A poorly-chosen distribution will often provide erroneous predictions for many years to come. 7. Applications in Other Areas of Transportation Research: Fitting distributions to data samples is important in several areas of transportation research, such as: • Traffic Operations—to analyze shapes of vehicle headway distributions, which are of great interest, especially as a precursor to calibrating and using simulation models. • Traffic Safety—to analyze collision frequency data. Analysts often assume that the Poisson distribution is a good fit for collision frequency data and must use the method described here to validate the claim. • Pavement Engineering—to form models of pavement wear or otherwise compare results obtained using different designs, as it is often required to check the distributions of the parameters used (e.g., roughness). Example 5: Construction; Simple Comparisons to Specified Values Area: Construction Method of Analysis: Simple comparisons to specified values—using Student’s t-test to compare the mean value of a small sample to a standard or other requirement (i.e., to a population with a known mean and unknown standard deviation or variance) 1. 
Research Question/Problem Statement: A contractor wants to determine if a specified soil compaction can be achieved on a segment of the road under construction by using an on-site roller or if a new roller must be brought in.

The cost of obtaining samples for many construction materials and practices is quite high. As a result, decisions often must be made based on a small number of samples. The appropriate statistical technique for comparing the mean value of a small sample with a standard or requirement is Student's t-test. Formally, the working, or null, hypothesis (Ho) and the alternative hypothesis (Ha) can be stated as follows:

Ho: The soil compaction achieved using the on-site roller (CA) is less than a specified value (CS); that is, (CA < CS).
Ha: The soil compaction achieved using the on-site roller (CA) is greater than or equal to the specified value (CS); that is, (CA ≥ CS).

Question/Issue
Determine whether a sample mean exceeds a specified value. Alternatively, determine the probability of obtaining a sample mean (x̄) from a sample of size n, if the universe being sampled has a true mean less than or equal to a population mean with an unknown variance. In this example, is an observed mean of soil compaction samples equal to or greater than a specified value?

2. Identification and Description of Variables: The variable to be used is the soil density results of nuclear densometer tests. These values will be used to determine whether the use of the on-site roller is adequate to meet the contract-specified soil density obtained in the laboratory (Proctor density) of 95%.

3. Data Collection: A 125-foot section of road is constructed and compacted with the on-site roller, and four samples of the soil density are obtained (25 feet, 50 feet, 75 feet, and 100 feet from the beginning of the test section).

4. Specification of Analysis Technique and Data Analysis: For small samples (n < 30) where the population mean is known but the population standard deviation is unknown, it is not appropriate to describe the distribution of the sample mean with a normal distribution. The appropriate distribution is called Student's distribution (t-distribution or t-statistic). The equation for Student's t-statistic is:

t = (x̄ - x̄′) / (S / √n)

where x̄ is the sample mean, x̄′ is the population mean (or specified standard), S is the sample standard deviation, and n is the sample size.

The four nuclear densometer readings were 98%, 97%, 93%, and 99%. Then, showing some simple sample calculations,

X̄ = (Σ X_i) / n = (98 + 97 + 93 + 99) / 4 = 387 / 4 = 96.75%

S = √[Σ(X_i - X̄)² / (n - 1)] = √(20.75 / 3) = 2.63%
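These sample statistics and the t-test itself can be checked quickly with statistical software. The sketch below is illustrative only and is not part of the original example; it applies scipy's one-sample t-test to the four densometer readings, and the one-sided p-value line is simply one way a reader might frame the comparison against the 95% specification.

    from scipy import stats

    readings = [98, 97, 93, 99]     # nuclear densometer results, percent of Proctor density
    spec = 95.0                     # contract-specified density, percent

    t_stat, p_two_sided = stats.ttest_1samp(readings, popmean=spec)
    print(f"mean = {sum(readings) / len(readings):.2f}, t = {t_stat:.2f}")   # t is about 1.33

    # One-sided p-value for the question "does the mean exceed the specification?"
    p_one_sided = p_two_sided / 2 if t_stat > 0 else 1 - p_two_sided / 2
    print(f"one-sided p-value = {p_one_sided:.3f}")

    # Critical t-value used in the text for alpha = 0.05 and n - 1 = 3 degrees of freedom
    print(f"t_crit = {stats.t.ppf(0.95, df=3):.3f}")                         # about 2.353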

Using the equation for t above,

t = (96.75 - 95.00) / (2.63 / √4) = 1.75 / 1.32 = 1.33

The calculated value of the t-statistic (1.33) is most typically compared to the tabularized values of the t-statistic (e.g., NCHRP Project 20-45, Volume 2, Appendix C, Table C-4) for a given significance level (typically called t critical or tcrit). For a sample size of n = 4 having 3 (n - 1) degrees of freedom (df), the values for tcrit are: 1.638 for α = 0.10 and 2.353 for α = 0.05 (two common values of α for testing, the latter being most common).

Important: The specification of the significance level (α level) for testing should be done before actual testing and interpretation of results are done. In many instances, the appropriate level is defined by the agency doing the testing, a specified testing standard, or simply common practice. Generally speaking, selection of a smaller value for α (e.g., α = 0.05 versus α = 0.10) sets a more stringent standard.

In this example, because the calculated value of t (1.33) is less than the critical value (2.353, given α = 0.05), the null hypothesis is accepted. That is, the engineer cannot be confident that the mean value from the densometer tests (96.75%) is greater than the required specification (95%). If a lower confidence level is chosen (e.g., α = 0.15), the value for tcrit would change to 1.250, which means the null hypothesis would be rejected. A lower confidence level can have serious implications. For example, there is an approximately 15% chance that the standard will not be met. That level of risk may or may not be acceptable to the contractor or the agency. Notice that in many standards the required significance level is stated (typically α = 0.05). It should be emphasized that the confidence level should be chosen before calculations and testing are done. It is not generally permissible to change the confidence level after calculations have been performed. Doing this would be akin to arguing that standards can be relaxed if a test gives an answer that the analyst doesn't like.

The results of small sample tests often are sensitive to the number of samples that can be obtained at a reasonable cost. (The mean value may change considerably as more data are added.) In this example, if it were possible to obtain nine independent samples (as opposed to four) and the mean value and sample standard deviation were the same as with the four samples, the calculation of the t-statistic would be:

t = (96.75 - 95.00) / (2.63 / √9) = 1.99

Comparing the value of t (with a larger sample size) to the appropriate tcrit (for n - 1 = 8 df and α = 0.05) of 1.860 changes the outcome. That is, the calculated value of the t-statistic is now larger than the tabularized value of tcrit, and the null hypothesis is rejected. Thus, it is accepted that the mean of the densometer readings meets or exceeds the standard. It should be noted, however, that the inclusion of additional tests may yield a different mean value and standard deviation, in which case the results could be different.

5. Interpreting the Results: By themselves, the results of the statistical analysis are insufficient to answer the question as to whether a new roller should be brought to the project site. These results only provide information the contractor can use to make this decision. The ultimate decision should be based on these probabilities and knowledge of the cost of each option.
What is the cost of bringing in a new roller now? What is the cost of starting the project and then determining the current roller is not adequate and then bringing in a new roller? Will this decision result in a delay in project completion—and does the contract include an incentive for early completion and/or a penalty for missing the completion date? If it is possible to conduct additional independent densometer tests, what is the cost of conducting them?

If there is a severe penalty for missing the deadline (or a significant reward for finishing early), the contractor may be willing to incur the cost of bringing in a new roller rather than accepting a 15% probability of being delayed.

6. Conclusion and Discussion: In some cases the decision about which alternative is preferable can be expressed in the form of a probability (or level of confidence) required to make a decision. The decision criterion is then expressed in a hypothesis and the probability of rejecting that hypothesis. In this example, if the hypothesis to be tested is "Using the on-site roller will provide an average soil density of 95% or higher" and the level of confidence is set at 95%, given a sample of four tests the decision will be to bring in a new roller. However, if nine independent tests could be conducted, the results in this example would lead to a decision to use the on-site roller.

7. Applications in Other Areas in Transportation Research: Simple comparisons to specified values can be used in a variety of areas of transportation research. Some examples include:
• Traffic Operations—to compare the average annual number of crashes at intersections with roundabouts with the average annual number of crashes at signalized intersections.
• Pavement Engineering—to test the compressive strength of concrete slabs.
• Maintenance—to test the results of a proposed new deicer compound.

Example 6: Maintenance; Simple Two-Sample Comparisons

Area: Maintenance
Method of Analysis: Simple two-sample comparisons (t-test for paired comparisons; comparing the mean values of two sets of matched data)

1. Research Question/Problem Statement: As a part of a quality control and quality assurance (QC/QA) program for highway maintenance and construction, an agency engineer wants to compare and identify discrepancies in the contractor's testing procedures or equipment in making measurements on materials being used. Specifically, compacted air voids in asphalt mixtures are being measured. In this instance, the agency's test results need to be compared, one-to-one, with the contractor's test results. Samples are drawn or made and then literally split and tested—one by the contractor, one by the agency. Then the pairs of measurements are analyzed. A paired t-test will be used to make the comparison. (For another type of two-sample comparison, see Example 7.)

Question/Issue
Use collected data to test if two sets of results are similar. Specifically, do two testing procedures to determine air voids produce the same results? Stated in formal terms, the null and alternative hypotheses are:

Ho: There is no mean difference in air voids between agency and contractor test results: X̄_d = 0
Ha: There is a mean difference in air voids between agency and contractor test results: X̄_d ≠ 0

(For definitions and more discussion about the formulation of formal hypotheses for testing, see NCHRP Project 20-45, Volume 2, Appendix A and Volume 1, Chapter 2, "Hypothesis.")

2. Identification and Description of Variables: The testing procedure for laboratory-compacted air voids in the asphalt mixture needs to be verified.

The split-sample test results for laboratory-compacted air voids are shown in Table 9. Twenty samples are prepared using the same asphalt mixture. Half of the samples are prepared in the agency's laboratory and the other half in the contractor's laboratory. Given this arrangement, there are basically two variables of concern: who did the testing and the air void determination.

3. Data Collection: A sufficient quantity of asphalt mix to make 10 lots is produced in an asphalt plant located on a highway project. Each of the 10 lots is collected, split into two samples, and labeled. A sample from each lot, 4 inches in diameter and 2 inches in height, is prepared in the contractor's laboratory to determine the air voids in the compacted samples. A matched set of samples is prepared in the agency's laboratory and a similar volumetric procedure is used to determine the agency's lab-compacted air voids. The lab-compacted air void contents in the asphalt mixture for both the contractor and agency are shown in Table 9.

4. Specification of Analysis Technique and Data Analysis: A paired (two-sided) t-test will be used to determine whether a difference exists between the contractor and agency results. As noted above, in a paired t-test the null hypothesis is that the mean of the differences between each pair of two tests is 0 (there is no difference between the means). The null hypothesis can be expressed as follows:

Ho: X̄_d = 0

The alternate hypothesis, that the two means are not equal, can be expressed as follows:

Ha: X̄_d ≠ 0

The t-statistic for the paired measurements (i.e., the difference between the split-sample test results) is calculated using the following equation:

t = (X̄_d - 0) / (s_d / √n)

Using the actual data, the value of the t-statistic is calculated as follows:

t = (-0.88 - 0) / (0.70 / √10) = -3.97, so that |t| is approximately 4

Table 9. Laboratory-compacted air voids in split samples.

Sample      Contractor Air Voids (%)   Agency Air Voids (%)   Difference
1           4.37                       4.15                   0.21
2           3.76                       5.39                   -1.63
3           4.10                       4.47                   -0.37
4           4.39                       4.52                   -0.13
5           4.06                       5.36                   -1.29
6           4.14                       5.01                   -0.87
7           3.92                       5.23                   -1.30
8           3.38                       4.97                   -1.60
9           4.12                       4.37                   -0.25
10          3.68                       5.29                   -1.61
Mean        X̄ = 3.99                   X̄ = 4.88               X̄_d = -0.88
Std. dev.   s = 0.31                   s = 0.46               s_d = 0.70
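The paired comparison can be reproduced directly with statistical software. The following sketch is illustrative only; it runs scipy's paired t-test on the contractor and agency air void percentages from Table 9.

    from scipy import stats

    contractor = [4.37, 3.76, 4.10, 4.39, 4.06, 4.14, 3.92, 3.38, 4.12, 3.68]
    agency     = [4.15, 5.39, 4.47, 4.52, 5.36, 5.01, 5.23, 4.97, 4.37, 5.29]

    # Paired (two-sided) t-test on the split-sample differences
    t_stat, p_value = stats.ttest_rel(contractor, agency)
    print(f"t = {t_stat:.2f}, two-sided p = {p_value:.4f}")   # |t| is roughly 4

    # Critical value for alpha = 0.05 (two-sided) and n - 1 = 9 degrees of freedom
    print(f"t_crit = {stats.t.ppf(0.975, df=9):.3f}")         # 2.262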

For n - 1 (10 - 1 = 9) degrees of freedom and α = 0.05, the tcrit value can be looked up using a t-table (e.g., NCHRP Project 20-45, Volume 2, Appendix C, Table C-4):

t(0.025, 9) = 2.262

For a more detailed description of the t-statistic, see the glossary in NCHRP Project 20-45, Volume 2, Appendix A.

5. Interpreting the Results: Given that |t| ≈ 4 > t(0.025, 9) = 2.262, the engineer would reject the null hypothesis and conclude that the results of the paired tests are different. This means that the contractor and agency test results from paired measurements indicate that the test method, technicians, and/or test equipment are not providing similar results. Notice that the engineer cannot conclude anything about the material or production variation or what has caused the differences to occur.

6. Conclusion and Discussion: The results of the test indicate that a statistically significant difference exists between the test results from the two groups. When making such comparisons, it is important that random sampling be used when obtaining the samples. Also, because sources of variability influence the population parameters, the two sets of test results must have been sampled over the same time period, and the same sampling and testing procedures must have been used. It is best if one sample is drawn and then literally split in two, then another sample drawn, and so on. The identification of a difference is just that: notice that a difference exists. The reason for the difference must still be determined.

A common misinterpretation is that the result of the t-test provides the probability of the null hypothesis being true. Another way to look at the t-test result in this example is to conclude that some alternative hypothesis provides a better description of the data. The result does not, however, indicate that the alternative hypothesis is true. To ensure practical significance, it is necessary to assess the magnitude of the difference being tested. This can be done by computing confidence intervals, which are used to quantify the range of effect size and are often more useful than simple hypothesis testing.

Failure to reject a hypothesis also provides important information. Possible explanations include: occurrence of a type-II error (erroneous acceptance of the null hypothesis); small sample size; difference too small to detect; expected difference did not occur in data; there is no difference/effect. Proper experiment design and data collection can minimize the impact of some of these issues. (For a more comprehensive discussion of this topic, see NCHRP Project 20-45, Volume 2, Chapter 1.)

7. Applications in Other Areas of Transportation Research: The application of the t-test to compare two mean values in other areas of transportation research may include:
• Traffic Operations—to evaluate average delay in bus arrivals at various bus stops.
• Traffic Operations/Safety—to determine the effect of two enforcement methods on reduction in a particular traffic violation.
• Pavement Engineering—to investigate average performance of two pavement sections.
• Environment—to compare average vehicular emissions at two locations in a city.

Example 7: Materials; Simple Two-Sample Comparisons

Area: Materials
Method of Analysis: Simple two-sample comparisons (using the t-test to compare the mean values of two samples and the F-test for comparing variances)

1.
Research Question/Problem Statement: As a part of dispute resolution during quality control and quality assurance, a highway agency engineer wants to validate a contractor’s test results concerning asphalt content. In this example, the engineer wants to compare the results

of two sets of tests: one from the contractor and one from the agency. Formally, the (null) hypothesis to be tested, Ho, is that the contractor's tests and the agency's tests are from the same population. In other words, the null hypothesis is that the means of the two data sets will be equal, as will the standard deviations. Notice that in the latter instance the variances are actually being compared.

Test results were also compared in Example 6. In that example, the comparison was based on split samples. The same test specimens were tested by two different analysts using different equipment to see if the same results could be obtained by both. The major difference between Example 6 and Example 7 is that, in this example, the two samples are randomly selected from the same pavement section.

Question/Issue
Use collected data to test if two measured mean values are the same. In this instance, are two mean values of asphalt content the same? Stated in formal terms, the null and alternative hypotheses can be expressed as follows:

Ho: There is no difference in asphalt content between agency and contractor test results: (m_c - m_a) = 0
Ha: There is a difference in asphalt content between agency and contractor test results: (m_c - m_a) ≠ 0

2. Identification and Description of Variables: The contractor runs 12 asphalt content tests and the agency engineer runs 6 asphalt content tests over the same period of time, using the same random sampling and testing procedures. The question is whether it is likely that the tests have come from the same population based on their variability.

3. Data Collection: If the agency's objective is simply to identify discrepancies in the testing procedures or equipment, then verification testing should be done on split samples (as in Example 6). Using split samples, the difference in the measured variable can more easily be attributed to testing procedures. A paired t-test should be used. (For more information, see NCHRP Project 20-45, Volume 2, Chapter 4, Section A, "Analysis of Variance Methodology.") A split sample occurs when a physical sample (of whatever is being tested) is drawn and then literally split into two testable samples.

On the other hand, if the agency's objective is to identify discrepancies in the overall material, process, sampling, and testing processes, then validation testing should be done on independent samples. Notice the use of these terms. It is important to distinguish between testing to verify only the testing process (verification) versus testing to compare the overall production, sampling, and testing processes (validation). If independent samples are used, the agency test results still can be compared with contractor test results (using a simple t-test for comparing two means). If the test results are consistent, then the agency and contractor tests can be combined for contract compliance determination.

4. Specification of Analysis Technique and Data Analysis: When comparing the two data sets, it is important to compare both the means and the variances because the assumption when using the t-test requires equal variances for each of the two groups. A different test is used in each instance. The F-test provides a method for comparing the variances (the standard deviation squared) of two sets of data. Differences in means are assessed by the t-test. Generally, construction processes and material properties are assumed to follow a normal distribution.

In this example, a normal distribution is assumed. (The assumption of normality also can be tested, as in Example 4.) The ratios of variances follow an F-distribution, while the means of relatively small samples follow a t-distribution. Using these distributions, hypothesis tests can be conducted using the same concepts that have been discussed in prior examples. (For more information about the F-test, see NCHRP Project 20-45, Volume 2, Chapter 4, Section A, "Compute the F-ratio Test Statistic." For more information about the t-distribution, see NCHRP Project 20-45, Volume 2, Chapter 4, Section A.)

For samples from the same normal population, the statistic F (the ratio of the two sample variances) has a sampling distribution called the F-distribution. For validation and verification testing, the F-test is based on the ratio of the sample variance of the contractor's test results (s_c²) and the sample variance of the agency's test results (s_a²). Similarly, the t-test can be used to test whether the sample mean of the contractor's tests, X̄_c, and the agency's tests, X̄_a, came from populations with the same mean.

Consider the asphalt content test results from the contractor samples and agency samples (Table 10). In this instance, the F-test is used to determine whether the variance observed for the contractor's tests differs from the variance observed for the agency's tests.

Using the F-test

Step 1. Compute the variance (s²) for each set of tests: s_c² = 0.064 and s_a² = 0.092. As an example, s_c² can be calculated as:

s_c² = Σ(x_i - X̄_c)² / (n - 1) = [(6.4 - 6.1)² + (6.2 - 6.1)² + . . . + (5.7 - 6.1)²] / (12 - 1) = 0.0645

Step 2. Compute the F-ratio:

F_calc = s_a² / s_c² = 0.092 / 0.064 = 1.43

Table 10. Asphalt content test results from independent samples.

Contractor Samples        Agency Samples
 1    6.4                  1    5.4
 2    6.2                  2    5.8
 3    6.0                  3    6.2
 4    6.6                  4    5.4
 5    6.1                  5    5.6
 6    6.0                  6    5.8
 7    6.3
 8    6.1
 9    5.9
10    5.8
11    6.0
12    5.7

Descriptive statistics:
Contractor: X̄_c = 6.1, s_c² = 0.064, s_c = 0.25, n_c = 12
Agency:     X̄_a = 5.7, s_a² = 0.092, s_a = 0.30, n_a = 6

Step 3. Determine Fcrit from the F-distribution table, making sure to use the correct degrees of freedom (df) for the numerator (the number of observations minus 1, or n_a - 1 = 6 - 1 = 5) and the denominator (n_c - 1 = 12 - 1 = 11). For α = 0.01, Fcrit = 5.32. The critical F-value can be found from tables (see NCHRP Project 20-45, Volume 2, Appendix C, Table C-5). Read the F-value for 1 - α = 0.99, numerator and denominator degrees of freedom 5 and 11, respectively. Interpolation can be used if exact degrees of freedom are not available in the table. Alternatively, a statistical function in Microsoft Excel™ can be used to determine the F-value.

Step 4. Compare the two values to determine if Fcalc < Fcrit. If Fcalc < Fcrit is true, then the variances are equal; if not, they are unequal. In this example, Fcalc (1.43) is, in fact, less than Fcrit (5.32) and, thus, there is no evidence of unequal variances. Given this result, the t-test for the case of equal variances is used to determine whether to declare that the mean of the contractor's tests differs from the mean of the agency's tests.

Using the t-test

Step 1. Compute the sample means (X̄) for each set of tests: X̄_c = 6.1 and X̄_a = 5.7.

Step 2. Compute the pooled variance s_p² from the individual sample variances:

s_p² = [s_c²(n_c - 1) + s_a²(n_a - 1)] / (n_c + n_a - 2) = [0.064(12 - 1) + 0.092(6 - 1)] / (12 + 6 - 2) = 0.0731

Step 3. Compute the t-statistic using the following equation for equal variance:

t = (X̄_c - X̄_a) / √(s_p²/n_c + s_p²/n_a) = (6.1 - 5.7) / √(0.0731/12 + 0.0731/6) = 2.9

t(0.005, 16) = 2.921

(For more information, see NCHRP Project 20-45, Volume 2, Appendix C, Table C-4 for A = 1 - α/2 and ν = 16.)

5. Interpreting the Results: Given that F < Fcrit (i.e., 1.43 < 5.32), there is no reason to believe that the two sets of data have different variances. That is, they could have come from the same population. Therefore, the t-test can be used to compare the means using equal variance. Because t < tcrit (i.e., 2.9 < 2.921), the engineer does not reject the null hypothesis and, thus, assumes that the sample means are equal. The final conclusion is that it is likely that the contractor and agency test results represent the same process. In other words, with a 99% confidence level, it can be said that the agency's test results are not different from the contractor's and therefore validate the contractor tests.

6. Conclusion and Discussion: The simple t-test can be used to validate the contractor's test results by conducting independent sampling from the same pavement at the same time. Before conducting a formal t-test to compare the sample means, the assumption of equal variances needs to be evaluated. This can be accomplished by comparing sample variances using the F-test. The interpretation of results will be misleading if the equal variance assumption is not validated. If the variances of two populations being compared for their means are different, the mean comparison will reflect the difference between two separate populations. Finally, based on the comparison of means, one can conclude that the construction materials have consistent properties as validated by two independent sources (contractor and agency). This sort of comparison is developed further in Example 8, which illustrates tests for the equality of more than two mean values.
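The F-ratio and the equal-variance t-test in this example can be reproduced with a short script. The sketch below is illustrative only; it uses the Table 10 asphalt contents with scipy, and it reads the critical values from the same distributions rather than from the printed tables.

    import numpy as np
    from scipy import stats

    contractor = np.array([6.4, 6.2, 6.0, 6.6, 6.1, 6.0, 6.3, 6.1, 5.9, 5.8, 6.0, 5.7])
    agency = np.array([5.4, 5.8, 6.2, 5.4, 5.6, 5.8])

    # Steps 1-2 of the F-test: sample variances and their ratio
    var_c = contractor.var(ddof=1)          # about 0.064
    var_a = agency.var(ddof=1)              # about 0.092
    f_calc = var_a / var_c                  # about 1.43

    # Steps 3-4: critical F-value for alpha = 0.01 with (5, 11) degrees of freedom
    f_crit = stats.f.ppf(0.99, dfn=len(agency) - 1, dfd=len(contractor) - 1)   # about 5.32
    print(f"F = {f_calc:.2f}, Fcrit = {f_crit:.2f}")

    # Equal-variance (pooled) t-test on the two means
    t_stat, p_value = stats.ttest_ind(contractor, agency, equal_var=True)
    print(f"t = {t_stat:.2f}, two-sided p = {p_value:.4f}")                    # t is about 2.9

    # Critical t-value used in the text: alpha = 0.01 two-sided, 16 degrees of freedom
    print(f"tcrit = {stats.t.ppf(0.995, df=16):.3f}")                          # 2.921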

examples of effective experiment Design and Data analysis in transportation research 35 7. Applications in Other Areas of Transportation Research: The simple t-test can be used to compare means of two independent samples. Applications for this method in other areas of transportation research may include: • Traffic Operations – to compare average speeds at two locations along a route. – to evaluate average delay times at two intersections in an urban area. • Pavement Engineering—to investigate the difference in average performance of two pavement sections. • Maintenance—to determine the effects of two maintenance treatments on average life extension of two pavement sections. Example 8: Laboratory Testing/Instrumentation; Simple Analysis of Variance (ANOVA) Area: Laboratory testing and/or instrumentation Method of Analysis: Simple analysis of variance (ANOVA) comparing the mean values of more than two samples and using the F-test 1. Research Question/Problem Statement: An engineer wants to test and compare the com- pressive strength of five different concrete mix designs that vary in coarse aggregate type, gradation, and water/cement ratio. An experiment is conducted in a laboratory where five different concrete mixes are produced based on given specifications, and tested for com- pressive strength using the ASTM International standard procedures. In this example, the comparison involves inference on parameters from more than two populations. The purpose of the analysis, in other words, is to test whether all mix designs are similar to each other in mean compressive strength or whether some differences actually exist. ANOVA is the statistical procedure used to test the basic hypothesis illustrated in this example. Question/Issue Compare the means of more than two samples. In this instance, compare the compres- sive strengths of five concrete mix designs with different combinations of aggregates, gradation, and water/cement ratio. More formally, test the following hypotheses: Ho: There is no difference in mean compressive strength for the various (five) concrete mix types. Ha: At least one of the concrete mix types has a different compressive strength. 2. Identification and Description of Variables: In this experiment, the factor of interest (independent variable) is the concrete mix design, which has five levels based on differ- ent coarse aggregate types, gradation, and water/cement ratios (denoted by t and labeled A through E in Table 11). Compressive strength is a continuous response (dependent) variable, measured in pounds per square inch (psi) for each specimen. Because only one factor is of interest in this experiment, the statistical method illustrated is often called a one-way ANOVA or simple ANOVA. 3. Data Collection: For each of the five mix designs, three replicates each of cylinders 4 inches in diameter and 8 inches in height are made and cured for 28 days. After 28 days, all 15 specimens are tested for compressive strength using the standard ASTM International test. The compres- sive strength data and summary statistics are provided for each mix design in Table 11. In this example, resource constraints have limited the number of replicates for each mix design to

three. (For a discussion on sample size determination based on statistical power requirements, see NCHRP Project 20-45, Volume 2, Chapter 1, "Sample Size Determination.")

Table 11. Concrete compressive strength (psi) after 28 days.

Replicate     Mix Design A     Mix Design B     Mix Design C     Mix Design D     Mix Design E
1             y_11 = 5416      y_21 = 5292      y_31 = 4097      y_41 = 5056      y_51 = 4165
2             y_12 = 5125      y_22 = 4779      y_32 = 3695      y_42 = 5216      y_52 = 3849
3             y_13 = 4847      y_23 = 4824      y_33 = 4109      y_43 = 5235      y_53 = 4089
Mean          ȳ_1. = 5129      ȳ_2. = 4965      ȳ_3. = 3967      ȳ_4. = 5169      ȳ_5. = 4034
Std. dev.     s_1 = 284.52     s_2 = 284.08     s_3 = 235.64     s_4 = 98.32      s_5 = 164.94
Overall mean: ȳ.. = 4653

4. Specification of Analysis Technique and Data Analysis: To perform a one-way ANOVA, preliminary calculations are carried out to compute the overall mean (ȳ..), the sample means (ȳ_i.), and the sample variances (s_i²) given the total sample size (n_T = 15) as shown in Table 11. The basic strategy for ANOVA is to compare the variance between levels or groups—specifically, the variation between sample means—to the variance within levels. This comparison is used to determine if the levels explain a significant portion of the variance. (Details for performing a one-way ANOVA are given in NCHRP Project 20-45, Volume 2, Chapter 4, Section A, "Analysis of Variance Methodology.")

ANOVA is based on partitioning of the total sum of squares (TSS, a measure of overall variability) into within-level and between-levels components. The TSS is defined as the sum of the squares of the differences of each observation (y_ij) from the overall mean (ȳ..). The TSS, between-levels sum of squares (SSB), and within-level sum of squares (SSE) are computed as follows:

TSS = Σ_i,j (y_ij - ȳ..)² = 4839620.90
SSB = Σ_i,j (ȳ_i. - ȳ..)² = 4331513.60
SSE = Σ_i,j (y_ij - ȳ_i.)² = 508107.30

The next step is to compute the between-levels mean square (MSB) and within-levels mean square (MSE) based on respective degrees of freedom (df). The total degrees of freedom (df_T), between-levels degrees of freedom (df_B), and within-levels degrees of freedom (df_E) for one-way ANOVA are computed as follows:

df_T = n_T - 1 = 15 - 1 = 14
df_B = t - 1 = 5 - 1 = 4
df_E = n_T - t = 15 - 5 = 10

where n_T = the total sample size and t = the total number of levels or groups.

The next step of the ANOVA procedure is to compute the F-statistic. The F-statistic is the ratio of two variances: the variance due to interaction between the levels, and the variance due to differences within the levels. Under the null hypothesis, the between-levels mean square (MSB) and within-levels mean square (MSE) provide two independent estimates of the variance. If the means for different levels of mix design are truly different from each other, the MSB will tend

to be larger than the MSE, such that it will be more likely to reject the null hypothesis. For this example, the calculations for MSB, MSE, and F are as follows:

MSB = SSB / df_B = 1082878.40
MSE = SSE / df_E = 50810.70
F = MSB / MSE = 21.31

If there are no effects due to level, the F-statistic will tend to be smaller. If there are effects due to level, the F-statistic will tend to be larger, as is the case in this example. ANOVA computations usually are summarized in the form of a table. Table 12 summarizes the computations for this example.

Table 12. ANOVA results.

Source    Sum of Squares (SS)   Degrees of Freedom (df)   Mean Square (MS)   F       Probability > F (Significance)
Between   4331513.60            4                         1082878.40         21.31   0.0000698408
Within    508107.30             10                        50810.70
Total     4839620.90            14

The final step is to determine Fcrit from the F-distribution table (e.g., NCHRP Project 20-45, Volume 2, Appendix C, Table C-5) with t - 1 (5 - 1 = 4) degrees of freedom for the numerator and n_T - t (15 - 5 = 10) degrees of freedom for the denominator. For a significance level of α = 0.01, Fcrit is found (in Table C-5) to be 5.99. Given that F > Fcrit (21.31 > 5.99), the null hypothesis that all mix designs have equal compressive strength is rejected, supporting the conclusion that at least two mix designs are different from each other in their mean effect. Table 12 also shows the p-value calculated using a computer program. The p-value is the probability of obtaining a test statistic at least as large as the one observed if the null hypothesis were true. The p-value of 0.0000698408 is well below the chosen significance level of 0.01.

5. Interpreting the Results: The ANOVA results in rejection of the null hypothesis at α = 0.01. That is, the mean values are judged to be statistically different. However, the ANOVA result does not indicate where the difference lies. For example, does the compressive strength of mix design A differ from that of mix design C or D? To carry out such multiple mean comparisons, the analyst must control the experiment-wise error rate (EER) by employing more conservative methods such as Tukey's test, Bonferroni's test, or Scheffe's test, as appropriate. (Details for ANOVA are given in NCHRP Project 20-45, Volume 2, Chapter 4, Section A, "Analysis of Variance Methodology.")

The coefficient of determination (R²) provides a rough indication of how well the statistical model fits the data. For this example, R² is calculated as follows:

R² = SSB / TSS = 4331513.60 / 4839620.90 = 0.90

For this example, R² indicates that the one-way ANOVA classification model accounts for 90% of the total variation in the data. In the controlled laboratory experiment demonstrated in this example, R² = 0.90 indicates a fairly acceptable fit of the statistical model to the data.
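The ANOVA table can be checked with standard statistical software. The sketch below is illustrative only; it runs scipy's one-way ANOVA on the Table 11 compressive strengths and recomputes the sums of squares and R² for comparison with Table 12.

    import numpy as np
    from scipy import stats

    # Compressive strength (psi) after 28 days, three replicates per mix design (Table 11)
    mixes = {
        "A": [5416, 5125, 4847],
        "B": [5292, 4779, 4824],
        "C": [4097, 3695, 4109],
        "D": [5056, 5216, 5235],
        "E": [4165, 3849, 4089],
    }

    f_stat, p_value = stats.f_oneway(*mixes.values())
    print(f"F = {f_stat:.2f}, p = {p_value:.10f}")        # about 21.31 and 0.00007

    # Sums of squares and R-squared, mirroring the hand calculations above
    all_obs = np.concatenate(list(mixes.values()))
    grand_mean = all_obs.mean()
    ssb = sum(len(y) * (np.mean(y) - grand_mean) ** 2 for y in mixes.values())
    tss = ((all_obs - grand_mean) ** 2).sum()
    print(f"SSB = {ssb:.1f}, SSE = {tss - ssb:.1f}, R^2 = {ssb / tss:.2f}")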

6. Conclusion and Discussion: This example illustrates a simple one-way ANOVA in which inference regarding parameters (mean values) from more than two populations or treatments was desired. The focus of the computations was the construction of the ANOVA table. Before proceeding with ANOVA, however, an analyst must verify that the assumptions of common variance and data normality are satisfied within each group/level. The results do not establish the cause of the difference in compressive strength between mix designs in any way.

The experimental setup and analytical procedure shown in this example may be used to test other properties of mix designs such as flexure strength. If another factor (for example, water/cement ratio with levels low or high) is added to the analysis, the classification becomes a two-way ANOVA. (In this report, two-way ANOVA is demonstrated in Example 11.) Notice that the equations shown in Example 8 may only be used for one-way ANOVA with balanced designs, meaning that there are equal numbers of replicates for each level within a factor. (For a discussion of computations for unbalanced designs and multifactor designs, see NCHRP Project 20-45.)

7. Applications in Other Areas of Transportation Research: Examples of applications of one-way ANOVA in other areas of transportation research include:
• Traffic Operations—to determine the effect of various traffic calming devices on average speeds in residential areas.
• Traffic Operations/Safety—to study the effect of weather conditions on accidents in a given time period.
• Work Zones—to compare the effect of different placements of work zone signs on reduction in highway speeds at some downstream point.
• Materials—to investigate the effect of recycled aggregates on compressive and flexural strength of concrete.

Example 9: Materials; Simple Analysis of Variance (ANOVA)
Area: Materials
Method of Analysis: Simple analysis of variance (ANOVA) comparing more than two mean values and using the F-test for equality of means

1. Research Question/Problem Statement: To illustrate how increasingly detailed analysis may be appropriate, Example 9 is an extension of the two-sample comparison presented in Example 7. As a part of dispute resolution during quality control and quality assurance, let's say the highway agency engineer from Example 7 decides to reconfirm the contractor's test results for asphalt content. The agency hires an independent consultant to verify both the contractor- and agency-measured asphalt contents. It now becomes necessary to compare more than two mean values. A simple one-way analysis of variance (ANOVA) can be used to analyze the asphalt contents measured by the three different parties.

Question/Issue
Extend a comparison of two mean values to compare three (or more) mean values. Specifically, use data collected by several (>2) different parties to see if the results (mean values) are the same. Formally, test the following null (Ho) and alternative (Ha) hypotheses, which can be stated as follows:
Ho: There is no difference in asphalt content among the three different parties, that is, µcontractor = µagency = µconsultant.
Ha: At least one of the parties has a different measured asphalt content.

2. Identification and Description of Variables: The independent consultant runs 12 additional asphalt content tests by taking independent samples from the same pavement section as the agency and contractor. The question is whether it is likely that the tests came from the same population, based on their variability.

3. Data Collection: The descriptive statistics (mean, standard deviation, and sample size) for the asphalt content data collected by the three parties are shown in Table 13. Notice that 12 measurements each have been taken by the contractor and the independent consultant, while the agency has taken only six measurements. The data for the contractor and the agency are the same as presented in Example 7. For brevity, the consultant's raw observations are not repeated here. The mean value and standard deviation for the consultant's data are calculated using the same formulas and equations that were used in Example 7.

Table 13. Asphalt content data summary.

Party         Asphalt Content (percent)
Contractor    X̄1 = 6.1     s1 = 0.254    n1 = 12
Agency        X̄2 = 5.7     s2 = 0.303    n2 = 6
Consultant    X̄3 = 5.12    s3 = 0.186    n3 = 12

4. Specification of Analysis Technique and Data Analysis: The agency engineer can use one-way ANOVA to resolve this question. (Details for one-way ANOVA are available in NCHRP Project 20-45, Volume 2, Chapter 4, Section A, "Analysis of Variance Methodology.") The objective of the ANOVA is to determine whether the variance observed in the dependent variable (in this case, asphalt content) is due to the differences among the samples (different from one party to another) or due to the differences within the samples. ANOVA is basically an extension of two-sample comparisons to cases in which three or more samples are being compared. More formally, the engineer is testing to see whether the between-sample variability is large relative to the within-sample variability, as stated in the formal hypothesis. This type of comparison also may be referred to as between-groups versus within-groups variance.

Rejection of the null hypothesis (that the mean values are the same) gives the engineer some information concerning differences among the population means; however, it does not indicate which means actually differ from each other. Rejection of the null hypothesis tells the engineer that differences exist, but it does not specify whether X̄1 differs from X̄2 or from X̄3. To control the experiment-wise error rate (EER) for multiple mean comparisons, a conservative test—Tukey's procedure for unplanned comparisons—can be used. (Information about Tukey's procedure can be found in almost any good statistics textbook, such as those by Freund and Wilson [2003] and Kutner et al. [2005].) The F-statistic calculated for determining the effect of who (agency, contractor, or consultant) measured the asphalt content is given in Table 14. (See Example 8 for a more detailed discussion of the calculations necessary to create Table 14.)

Table 14. ANOVA results.

Source           Sum of Squares (SS)   Degrees of Freedom (df)   Mean Square (MS)   F      Significance
Between groups   5.6                   2                         2.8                49.1   0.000
Within groups    1.5                   27                        0.06
Total            7.2                   29

Although the ANOVA results reveal whether there are overall differences, it is always good practice to visually examine the data. For example, Figure 9 shows the mean and associated 95% confidence intervals (CI) of the mean asphalt content measured by each of the three parties involved in the testing.

Figure 9. Mean and confidence intervals for asphalt content data.

5. Interpreting the Results: A simple one-way ANOVA is conducted to determine whether there is a difference in mean asphalt content as measured by the three different parties. The analysis shows that the F-statistic is significant (p-value < 0.05), meaning that at least two of the means are significantly different from each other. The engineer can use Tukey's procedure for comparisons of multiple means, or he or she can observe the plotted 95% confidence intervals to determine which means are actually (and significantly) different from each other (see Figure 9). Because their confidence intervals overlap, the results show that the asphalt contents measured by the contractor and the agency are only somewhat different. (These same conclusions were obtained in Example 7.) However, the mean asphalt content obtained by the consultant is significantly different from (and lower than) that obtained by both of the other parties. This is evident because the confidence interval for the consultant does not overlap with the confidence interval of either of the other two parties.
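Because only summary statistics are reported for this example (Table 13), the ANOVA table can be reconstructed directly from the group means, standard deviations, and sample sizes. A minimal Python sketch is shown below, assuming NumPy and SciPy are available; small differences from Table 14 reflect rounding of the reported summary values.

```python
import numpy as np
from scipy import stats

# Summary statistics from Table 13: party -> (mean, standard deviation, sample size)
parties = {
    "contractor": (6.10, 0.254, 12),
    "agency":     (5.70, 0.303, 6),
    "consultant": (5.12, 0.186, 12),
}
means = np.array([m for m, s, n in parties.values()])
sds   = np.array([s for m, s, n in parties.values()])
ns    = np.array([n for m, s, n in parties.values()])

grand_mean = (ns * means).sum() / ns.sum()
ssb = (ns * (means - grand_mean) ** 2).sum()     # between-groups sum of squares
sse = ((ns - 1) * sds ** 2).sum()                # within-groups sum of squares
df_b, df_e = len(ns) - 1, ns.sum() - len(ns)
f_stat = (ssb / df_b) / (sse / df_e)
p_value = stats.f.sf(f_stat, df_b, df_e)         # upper-tail F probability

print(f"SSB = {ssb:.2f}, SSE = {sse:.2f}")
print(f"F({df_b}, {df_e}) = {f_stat:.1f}, p = {p_value:.2g}")
```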

examples of effective experiment Design and Data analysis in transportation research 41 6. Conclusion and Discussion: This example uses a simple one-way ANOVA to compare the mean values of three sets of results using data drawn from the same test section. The error bar plots for data from the three different parties visually illustrate the statistical differences in the multiple means. However, the F-test for multiple means should be used to formally test the hypothesis of the equality of means. The interpretation of results will be misleading if the variances of populations being compared for their mean difference are not equal. Based on the comparison of the three means, it can be concluded that the construction material in this example may not have consistent properties, as indicated by the results from the independent consultant. 7. Applications in Other Areas of Transportation Research: Simple one-way ANOVA is often used when more than two means must be compared. Examples of applications in other areas of transportation research include: • Traffic Safety/Operations—to evaluate the effect of intersection type on the average number of accidents per month. Three or more types of intersections (e.g., signalized, non-signalized, and rotary) could be selected for study in an urban area having similar traffic volumes and vehicle mix. • Pavement Engineering – to investigate the effect of hot-mix asphalt (HMA) layer thickness on fatigue cracking after 20 years of service life. Three HMA layer thicknesses (5 inches, 6 inches, and 7 inches) are to be involved in this study, and other factors (i.e., traffic, climate, and subbase/base thicknesses and subgrade types) need to be similar. – to determine the effect of climatic conditions on rutting performance of flexible pavements. Three or more climatic conditions (e.g., wet-freeze, wet-no-freeze, dry-freeze, and dry-no-freeze) need to be considered while other factors (i.e., traffic, HMA, and subbase/ base thicknesses and subgrade types) need to be similar. Example 10: Pavements; Simple Analysis of Variance (ANOVA) Area: Pavements Method of Analysis: Simple analysis of variance (ANOVA) comparing the mean values of more than two samples and using the F-test 1. Research Question/Problem Statement: The aggregate coefficient of thermal expansion (CTE) in Portland cement concrete (PCC) is a critical factor affecting thermal behavior of PCC slabs in concrete pavements. In addition, the interaction between slab curling (caused by the thermal gradient) and axle loads is assumed to be a critical factor for concrete pavement performance in terms of cracking. To verify the effect of aggregate CTE on slab cracking, a pavement engineer wants to conduct a simple observational study by collecting field pave- ment performance data on three different types of pavement. For this example, three types of aggregate (limestone, dolomite, and gravel) are being used in concrete pavement construction and yield the following CTEs: • 4 in./in. per °F • 5 in./in. per °F • 6.5 in./in. per °F It is necessary to compare more than two mean values. A simple one-way ANOVA is used to analyze the observed slab cracking performance by the three different concrete mixes with different aggregate types based on geology (limestone, dolomite, and gravel). All other factors that might cause variation in cracking are assumed to be held constant.

Question/Issue
Compare the means of more than two samples. Specifically, is the cracking performance of concrete pavements designed using more than two different types of aggregates the same? Stated a bit differently, is the performance of three different types of concrete pavement statistically different (are the mean performance measures different)?

2. Identification and Description of Variables: The engineer identifies 1-mile sections of uniform pavement within the state highway network with similar attributes (aggregate type, slab thickness, joint spacing, traffic, and climate). Field performance, in terms of the observed percentage of slab cracked ("% slab cracked," i.e., how cracked each slab is) for each pavement section after about 20 years of service, is considered in the analysis. The available pavement data are grouped (stratified) based on the aggregate type (CTE value). The % slab cracked after 20 years is the dependent variable, while the CTE of the aggregates is the independent variable. The question is whether pavement sections having different types of aggregate (CTE values) exhibit similar performance based on their variability.

3. Data Collection: From the data stratified by CTE, the engineer randomly selects nine pavement sections within each CTE category (i.e., 4, 5, and 6.5 in./in. per °F). The sample size is based on the statistical power (1 − β) requirements. (For a discussion on sample size determination based on statistical power requirements, see NCHRP Project 20-45, Volume 2, Chapter 1, "Sample Size Determination.") The descriptive statistics for the data, organized by the three CTE categories, are shown in Table 15. The engineer considers pavement performance data for nine pavement sections in each CTE category.

Table 15. Pavement performance data.

CTE (in./in. per °F)   % Slab Cracked After 20 Years
4                      X̄1 = 37.0    s1 = 4.8    n1 = 9
5                      X̄2 = 53.7    s2 = 6.1    n2 = 9
6.5                    X̄3 = 72.5    s3 = 6.3    n3 = 9
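The per-group 95% confidence intervals used for error-bar plots such as Figure 10 can be computed from the summary statistics in Table 15 as mean ± t(0.025, n−1) · s/√n. A minimal sketch, assuming NumPy and SciPy are available:

```python
import numpy as np
from scipy import stats

# Summary statistics from Table 15: CTE value -> (mean % slabs cracked, std dev, n)
cte_groups = {
    4.0: (37.0, 4.8, 9),
    5.0: (53.7, 6.1, 9),
    6.5: (72.5, 6.3, 9),
}

for cte, (mean, sd, n) in cte_groups.items():
    t_crit = stats.t.ppf(0.975, df=n - 1)      # two-sided 95% critical t-value
    half_width = t_crit * sd / np.sqrt(n)      # half-width of the confidence interval
    print(f"CTE {cte}: {mean:.1f} +/- {half_width:.1f} % slabs cracked")
```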

4. Specification of Analysis Technique and Data Analysis: Because the engineer is concerned with the comparison of more than two mean values, the easiest way to make the statistical comparison is to perform a one-way ANOVA (see NCHRP Project 20-45, Volume 2, Chapter 4). The comparison will help to determine whether the between-section variability is large relative to the within-section variability. More formally, the following hypotheses are tested:
Ho: All mean values are equal (i.e., µ1 = µ2 = µ3).
Ha: At least one of the means is different from the rest.
Although rejection of the null hypothesis gives the engineer some information concerning differences among the population means, it does not tell the engineer anything about how the means differ from each other. For example, does µ1 differ from µ2 or µ3? To control the experiment-wise error rate (EER) for multiple mean comparisons, a conservative test—Tukey's procedure for unplanned comparisons—can be used. (Information about Tukey's procedure can be found in almost any good statistics textbook, such as those by Freund and Wilson [2003] and Kutner et al. [2005].) The F-statistic calculated for determining the effect of CTE on % slab cracked after 20 years is shown in Table 16.

Table 16. ANOVA results.

Source           Sum of Squares (SS)   Degrees of Freedom (df)   Mean Square (MS)   F      Significance
Between groups   5652.7                2                         2826.3             84.1   0.000
Within groups    806.9                 24                        33.6
Total            6459.6                26

The data in Table 16 have been produced by considering the original data and following the procedures presented in earlier examples. The emphasis in this example is on understanding what the table of results provides the researcher. Also in this example, the test for homogeneity of variances (Levene test) shows no significant difference among the standard deviations of % slab cracked for different CTE values. Figure 10 presents the mean and associated 95% confidence intervals of the average % slab cracked (also called the mean and error bars) measured for the three CTE categories considered.

Figure 10. Error bars for % slab cracked with different CTE.

5. Interpreting the Results: A simple one-way ANOVA is conducted to determine if there is a difference among the mean values for % slab cracked for different CTE values. The analysis shows that the F-statistic is significant (p-value < 0.05), meaning that at least two of the means are statistically significantly different from each other. To gain more insight, the engineer can use Tukey's procedure to specifically compare the mean values, or the engineer may simply observe the plotted 95% confidence intervals to ascertain which means are significantly different from each other (see Figure 10). The plotted results show that the mean % slab cracked varies significantly for different CTE values—there is no overlap between the different mean/error bars. Figure 10 also shows that the mean % slab cracked is significantly higher for pavement sections having a higher CTE value. (For more information about Tukey's procedure, see NCHRP Project 20-45, Volume 2, Chapter 4.)

6. Conclusion and Discussion: In this example, simple one-way ANOVA is used to assess the effect of CTE on cracking performance of rigid pavements. The F-test for multiple means is used to formally test the (null) hypothesis of mean equality. The confidence interval plots for data from pavements having three different CTE values visually illustrate the statistical differences in the three means. The interpretation of results will be misleading if the variances of

44 effective experiment Design and Data analysis in transportation research populations being compared for their mean difference are not equal or if a proper multiple mean comparisons procedure is not adopted. Based on the comparison of the three means in this example, the engineer can conclude that the pavement slabs having aggregates with a higher CTE value will exhibit more cracking than those with lower CTE values, given that all other variables (e.g., climate effects) remain constant. 7. Applications in Other Areas of Transportation Research: Simple one-way ANOVA is widely used and can be employed whenever multiple means within a factor are to be compared with one another. Potential applications in other areas of transportation research include: • Traffic Operations—to evaluate the effect of commuting time on level of service (LOS) of an urban highway. Mean travel times for three periods (e.g., morning, afternoon, and evening) could be selected for specified highway sections to collect the traffic volume and headway data in all lanes. • Traffic Safety—to determine the effect of shoulder width on accident rates on rural highways. More than two shoulder widths (e.g., 0 feet, 6 feet, 9 feet, and 12 feet) should be selected in this study. • Pavement Engineering—to investigate the impact of air void content on flexible pavement fatigue performance. Pavement sections having three or more air void contents (e.g., 3%, 5%, and 7%) in the surface HMA layer could be selected to compare their average fatigue cracking performance after the same period of service (e.g., 15 years). • Materials—to study the effect of aggregate gradation on the rutting performance of flexible pavements. Three types of aggregate gradations (fine, intermediate, and coarse) could be adopted in the laboratory to make different HMA mix samples. Performance testing could be conducted in the laboratory to measure rut depths for a given number of load cycles. Example 11: Pavements; Factorial Design (ANOVA Approach) Area: Pavements Method of Analysis: Factorial design (an ANOVA approach used to explore the effects of varying more than one independent variable) 1. Research Question/Problem Statement: Extending the information from Example 10 (a simple ANOVA example for pavements), the pavement engineer has verified that the coefficient of thermal expansion (CTE) in Portland cement concrete (PCC) is a critical factor affecting thermal behavior of PCC slabs in concrete pavements and significantly affects concrete pave- ment performance in terms of cracking. The engineer now wants to investigate the effects of another factor, joint spacing (JS), in addition to CTE. To study the combined effects of PCC CTE and JS on slab cracking, the engineer needs to conduct a factorial design study by collect- ing field pavement performance data. As before, three CTEs will be considered: • 4 in./in. per °F, • 5 in./in. per °F, and • 6.5 in./in. per °F. Now, three different joint spacings (12 ft, 16 ft, and 20 ft) also will be considered. For this example, it is necessary to compare multiple means within each factor (main effects) and the interaction between the two factors (interactive effects). The statistical technique involved is called a multifactorial two-way ANOVA. 2. Identification and Description of Variables: The engineer identifies uniform 1-mile pavement sections within the state highway network with similar attributes (e.g., slab thickness, traffic, and climate). 
The field performance, in terms of observed percentage of each slab cracked (% slab cracked) after about 20 years of service for each pavement section, is considered the

dependent (or response) variable in the analysis. The available pavement data are stratified based on CTE and JS. CTE and JS are considered the independent variables. The question is whether pavement sections having different CTE and JS exhibit similar performance based on their variability.

Question/Issue
Use collected data to determine the effects of varying more than one independent variable on some measured outcome. In this example, compare the cracking performance of concrete pavements considering two independent variables: (1) coefficients of thermal expansion (CTE) as measured using more than two types of aggregate and (2) differing joint spacing (JS). More formally, the hypotheses can be stated as follows:
Ho: αi = 0, no difference in % slabs cracked for different CTE values.
Ho: γj = 0, no difference in % slabs cracked for different JS values.
Ho: (αγ)ij = 0 for all i and j, no difference in % slabs cracked for different CTE and JS combinations.

3. Data Collection: The descriptive statistics for the % slab cracked data by three CTE and three JS categories are shown in Table 17. From the data stratified by CTE and JS, the engineer has randomly selected three pavement sections within each of the nine combinations of CTE and JS values. (In other words, for each of the nine pavement sections from Example 10, the engineer has selected three JS.)

Table 17. Summary of cracking data.

Joint spacing (ft)   CTE = 4        CTE = 5        CTE = 6.5      Marginal
12                   32.4 (0.1)     46.8 (1.8)     65.3 (3.2)     48.2 (14.4)
16                   36.0 (2.4)     54.0 (2.9)     73.0 (1.1)     54.3 (16.1)
20                   42.7 (2.4)     60.3 (0.5)     79.1 (2.0)     60.7 (15.9)
Marginal             37.0 (4.8)     53.7 (6.1)     72.5 (6.3)     54.4 (15.8)
Note: CTE in in./in. per °F; cell entries are mean % slabs cracked with standard deviations in parentheses; n = 3 in each cell.

4. Specification of Analysis Technique and Data Analysis: The engineer can use two-way ANOVA test statistics to determine whether the between-section variability is large relative to the within-section variability for each factor, testing the following null hypotheses:
• Ho: αi = 0
• Ho: γj = 0
• Ho: (αγ)ij = 0
As mentioned before, although rejection of the null hypothesis does give the engineer some information concerning differences among the population means (i.e., there are differences among them), it does not clarify which means differ from each other. For example, does µ1 differ from µ2 or µ3? To control the experiment-wise error rate (EER) for the comparison of multiple means, a conservative test—Tukey's procedure for an unplanned comparison—can be used. (Information about two-way ANOVA is available in NCHRP Project 20-45, Volume 2, Chapter 4. Information about Tukey's procedure can be found in almost any good statistics textbook, such as those by Freund and Wilson [2003] and Kutner et al. [2005].)
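A two-way ANOVA of this kind is normally run in statistical software rather than by hand. The sketch below, assuming the pandas and statsmodels packages are available, illustrates one way to do it. Because the individual field observations are not listed in this example, three pseudo-replicates (mean − s, mean, mean + s) are constructed for each cell so that the cell means and standard deviations of Table 17 are reproduced exactly; the resulting table should closely match Table 18, with any differences due to rounding of the reported cell summaries.

```python
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

# Cell means and standard deviations from Table 17; n = 3 sections per cell.
cells = {  # (JS ft, CTE): (mean % slabs cracked, standard deviation)
    (12, 4.0): (32.4, 0.1), (12, 5.0): (46.8, 1.8), (12, 6.5): (65.3, 3.2),
    (16, 4.0): (36.0, 2.4), (16, 5.0): (54.0, 2.9), (16, 6.5): (73.0, 1.1),
    (20, 4.0): (42.7, 2.4), (20, 5.0): (60.3, 0.5), (20, 6.5): (79.1, 2.0),
}

# Pseudo-replicates (m - s, m, m + s) reproduce each cell's mean and std deviation.
rows = [{"JS": js, "CTE": cte, "cracking": y}
        for (js, cte), (m, s) in cells.items()
        for y in (m - s, m, m + s)]
df = pd.DataFrame(rows)

# Two-way ANOVA with interaction: main effects of CTE and JS plus CTE x JS
model = ols("cracking ~ C(CTE) * C(JS)", data=df).fit()
print(anova_lm(model, typ=2))   # compare with Table 18
```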

The results of the two-way ANOVA are shown in Table 18.

Table 18. ANOVA results.

Source           Sum of Squares (SS)   Degrees of Freedom (df)   Mean Square (MS)   F        Significance
CTE              5677.74               2                         2838.87            657.16   0.000
JS               703.26                2                         351.63             81.40    0.000
CTE × JS         0.12                  4                         0.03               0.007    0.999
Residual/error   77.76                 18                        4.32
Total            6458.88               26

From the first two lines it can be seen that both of the main effects, CTE and JS, are significant in explaining cracking behavior (i.e., both p-values < 0.05). However, the interaction (CTE × JS) is not significant (i.e., the p-value is 0.999, much greater than 0.05). Also, the test for homogeneity of variances (Levene statistic) shows that there is no significant difference among the standard deviations of % slab cracked for different CTE and JS values. Figure 11 illustrates the main and interactive effects of CTE and JS on % slabs cracked.

Figure 11. Main and interaction effects of CTE and JS on slab cracking (main effects plot and interaction plot of mean % slabs cracked by CTE and joint spacing).

5. Interpreting the Results: A two-way (multifactorial) ANOVA is conducted to determine whether differences exist among the mean values of % slab cracked for different CTE and JS values. The analysis shows that the main effects of both CTE and JS are significant, while the interaction effect is not (p-value > 0.05). These results show that CTE and JS each have a significant effect on slab cracking, but their combined (interaction) effect adds nothing beyond the separate main effects. Given these results, the conclusions will be based on the main effects alone, without considering interaction effects. (Had the interaction effect been significant, the conclusions would have been based on the interaction instead.) To gain more insight, the engineer can use Tukey's procedure to compare specific multiple means within each factor, or the engineer can simply observe the plotted means in Figure 11 to ascertain which means are significantly different from each other. The plotted results show that the mean % slab cracked varies significantly for different CTE and JS values; CTE appears to be more influential than JS. All lines are almost parallel to

examples of effective experiment Design and Data analysis in transportation research 47 each other when plotted for both factors together, showing no interactive effects between the levels of two factors. 6. Conclusion and Discussion: The two-way ANOVA can be used to verify the combined effects of CTE and JS on cracking performance of rigid pavements. The marginal mean plot for cracking having three different CTE and JS levels visually illustrates the differences in the multiple means. The plot of cell means for cracking within the levels of each factor can indicate the presence of interactive effect between two factors (in this example, CTE and JS). However, the F-test for multiple means should be used to formally test the hypothesis of mean equality. Finally, based on the comparison of three means within each factor (CTE and JS), the engineer can conclude that the pavement slabs having aggregates with higher CTE and JS values will exhibit more cracking than those with lower CTE and JS values. In this example, the effect of CTE on concrete pavement cracking seems to be more critical than that of JS. 7. Applications in Other Areas of Transportation Research: Multifactorial designs can be used when more than one factor is considered in a study. Possible applications of these methods can extend to all transportation-related areas, including: • Pavement Engineering – to determine the effects of base type and base thickness on pavement performance of flexible pavements. Two or more levels can be considered within each factor; for exam- ple, two base types (aggregate and asphalt-treated bases) and three base thicknesses (8 inches, 12 inches, and 18 inches). – to investigate the impact of pavement surface conditions and vehicle type on fuel con- sumption. The researcher can select pavement sections with three levels of ride quality (smooth, rough, and very rough) and three types of vehicles (cars, vans, and trucks). The fuel consumptions can be measured for each vehicle type on all surface conditions to determine their impact. • Materials – to study the effects of aggregate gradation and surface on tensile strength of hot-mix asphalt (HMA). The engineer can evaluate two levels of gradation (fine and coarse) and two types of aggregate surfaces (smooth and rough). The samples can be prepared for all the combinations of aggregate gradations and surfaces for determination of tensile strength in the laboratory. – to compare the impact of curing and cement types on the compressive strength of concrete mixture. The engineer can design concrete mixes in laboratory utilizing two cement types (Type I & Type III). The concrete samples can be cured in three different ways for 24 hours and 7 days (normal curing, water bath, and room temperature). Example 12: Work Zones; Simple Before-and-After Comparisons Area: Work zones Method of Analysis: Simple before-and-after comparisons (exploring the effect of some treat- ment before it is applied versus after it is applied) 1. Research Question/Problem Statement: The crash rate in work zones has been found to be higher than the crash rate on the same roads when a work zone is not present. For this reason, the speed limit in construction zones often is set lower than the prevailing non-work-zone speed limit. The state DOT decides to implement photo-radar speed enforcement in a work zone to determine if this speed-enforcement technique reduces the average speed of free- flowing vehicles in the traffic stream. 
They measure the speeds of a sample of free-flowing vehicles prior to installing the photo-radar speed-enforcement equipment in a work zone and

then measure the speeds of free-flowing vehicles at the same location after implementing the photo-radar system.

Question/Issue
Use collected data to determine whether a difference exists between results before and after some treatment is applied. For this example, does a photo-radar speed-enforcement system reduce the speed of free-flowing vehicles in a work zone, and, if so, is the reduction statistically significant?

2. Identification and Description of Variables: The variable to be analyzed is the mean speed of vehicles before and after the implementation of a photo-radar speed-enforcement system in a work zone.

3. Data Collection: The speeds of individual free-flowing vehicles are recorded for 30 minutes on a Tuesday between 10:00 a.m. and 10:30 a.m. before installing the photo-radar system. After the system is installed, the speeds of individual free-flowing vehicles are recorded for 30 minutes on a Tuesday between 10:00 a.m. and 10:30 a.m. The before sample contains 120 observations and the after sample contains 100 observations.

4. Specification of Analysis Technique and Data Analysis: A test of the significance of the difference between two means requires a statement of the hypothesis to be tested (Ho) and a statement of the alternate hypothesis (H1). In this example, these hypotheses can be stated as follows:
Ho: There is no difference in the mean speed of free-flowing vehicles before and after the photo-radar speed-enforcement system is deployed.
H1: There is a difference in the mean speed of free-flowing vehicles before and after the photo-radar speed-enforcement system is deployed.
Because these two samples are independent, a simple t-test is appropriate to test the stated hypotheses. This test requires the following procedure:

Step 1. Compute the mean speed (x̄) for the before sample (x̄b) and the after sample (x̄a) using the following equation:
x̄ = (1/n) Σ xi, where nb = 120 and na = 100
Results: x̄b = 53.1 mph and x̄a = 50.5 mph.

Step 2. Compute the variance (S²) for each sample using the following equation:
S² = Σ (xi − x̄)² / (n − 1)
where na = 100, x̄a = 50.5 mph, nb = 120, and x̄b = 53.1 mph
Results: Sb² = Σ (xb − x̄b)² / (nb − 1) = 12.06 and Sa² = Σ (xa − x̄a)² / (na − 1) = 12.97.

Step 3. Compute the pooled variance of the two samples using the following equation:
Sp² = [Σ (xa − x̄a)² + Σ (xb − x̄b)²] / (nb + na − 2)
Results: Sp² = 12.472 and Sp = 3.532.

Step 4. Compute the t-statistic using the following equation:
t = [(x̄b − x̄a) / Sp] × √(na nb / (na + nb))
Result: t = [(53.1 − 50.5) / 3.532] × √((100 × 120) / (100 + 120)) = 5.43.

5. Interpreting the Results: The results of the sample t-test are obtained by comparing the value of the calculated t-statistic (5.43 in this example) with the value of the t-statistic for the level of confidence desired. For a level of confidence of 95%, the t-statistic must be greater than 1.96 to reject the null hypothesis (Ho) that the use of a photo-radar speed-enforcement system does not change the speed of free-flowing vehicles. (For more information, see NCHRP Project 20-45, Volume 2, Appendix C, Table C-4.)

6. Conclusion and Discussion: The sample problem illustrates the use of a statistical test to determine whether the difference in the value of the variable of interest between the before condition and the after condition is statistically significant. The before condition is without photo-radar speed enforcement; the after condition is with photo-radar speed enforcement. In this sample problem, the computed t-statistic (5.43) is greater than the critical t-statistic (1.96), so the null hypothesis is rejected. This means the change in the speed of free-flowing vehicles when the photo-radar speed-enforcement system is used is statistically significant. (A small computational sketch of this test appears at the end of this example.) The assumption is made that all other factors that would affect the speed of free-flowing vehicles (e.g., traffic mix, weather, or construction activity) are the same in the before-and-after conditions.

This test is robust if the normality assumption does not hold completely; however, it should be checked using box plots. For significant departures from the normality and variance-equality assumptions, non-parametric tests must be conducted. (For more information, see NCHRP Project 20-45, Volume 2, Chapter 6, Section C, and also Example 21.)

The reliability of the results in this example could be improved by using a control group. As the example has been constructed, there is an assumption that the only thing that changed at this site was the use of photo-radar speed enforcement; that is, it is assumed that all observed differences are attributable to the use of the photo-radar. If other factors—even something as simple as a general decrease in vehicle speeds in the area—might have impacted speed changes, the effect of the photo-radar speed enforcement would have to be adjusted for those other factors. Measurements taken at a control site (ideally identical to the experiment site) during the same time periods could be used to detect background changes and then to adjust the photo-radar effects. Such a situation is explored in Example 13.

7. Applications in Other Areas in Transportation: The before-and-after comparison can be used whenever two independent samples of data are (or can be assumed to be) normally distributed with equal variance. Applications of before-and-after comparison in other areas of transportation research may include:
• Traffic Operations
– to compare the average delay to vehicles approaching a signalized intersection when a fixed-time signal is changed to an actuated signal or a traffic-adaptive signal.
– to compare the average number of vehicles entering and leaving a driveway when access is changed from full access to right-in, right-out only.
• Traffic Safety – to compare the average number of crashes on a section of road before and after the road is resurfaced. – to compare the average number of speeding citations issued per day when a stationary operation is changed to a mobile operation. • Maintenance—to compare the average number of citizen complaints per day when a change is made in the snow plowing policy.
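As noted in the conclusion above, the pooled two-sample t-test can be reproduced directly from the summary statistics computed in Steps 1 through 3. A minimal Python sketch, assuming SciPy is available; the variable names are illustrative.

```python
from scipy import stats

# Summary statistics from Example 12 (free-flow speeds, mph)
mean_before, var_before, n_before = 53.1, 12.06, 120
mean_after,  var_after,  n_after  = 50.5, 12.97, 100

# Pooled (equal-variance) two-sample t-test from summary statistics
t_stat, p_value = stats.ttest_ind_from_stats(
    mean1=mean_before, std1=var_before ** 0.5, nobs1=n_before,
    mean2=mean_after,  std2=var_after ** 0.5,  nobs2=n_after,
    equal_var=True,
)
print(f"t = {t_stat:.2f}, two-sided p = {p_value:.2g}")   # t is about 5.43
```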

50 effective experiment Design and Data analysis in transportation research Example 13: Traffic Safety; Complex Before-and-After Comparisons and Controls Area: Traffic safety Method of Analysis: Complex before-and-after comparisons using control groups (examining the effect of some treatment or application with consideration of other factors that may also have an effect) 1. Research Question/Problem Statement: A state safety engineer wants to estimate the effec- tiveness of fluorescent orange warning signs as compared to standard orange signs in work zones on freeways and other multilane highways. Drivers can see fluorescent signs from a longer distance than standard signs, especially in low-visibility conditions, and the extra cost of the fluorescent material is not too high. Work-zone safety is a perennial concern, especially on freeways and multilane highways where speeds and traffic volumes are high. Question/Issue How can background effects be separated from the effects of a treatment or application? Compared to standard orange signs, do fluorescent orange warning signs increase safety in work zones on freeways and multilane highways? 2. Identification and Description of Variables: The engineer quickly concludes that there is a need to collect and analyze safety surrogate measures (e.g., traffic conflicts and late lane changes) rather than collision data. It would take a long time and require experimentation at many work zones before a large sample of collision data could be ready for analysis on this question. Surrogate measures relate to collisions, but they are much more numerous and it is easier to collect a large sample of them in a short time. For a study of traffic safety, surrogate measures might include near-collisions (traffic conflicts), vehicle speeds, or locations of lane changes. In this example, the engineer chooses to use the location of the lane-change maneuver made by drivers in a lane to be closed entering a work zone. This particular surrogate safety measure is a measure of effectiveness (MOE). The hypothesis is that the farther downstream at which a driver makes a lane change out of a lane to be closed—when the highway is still below capacity—the safer the work zone. 3. Data Collection: The engineer establishes site selection criteria and begins examining all active work zones on freeways and multilane highways in the state for possible inclusion in the study. The site selection criteria include items such as an active work zone, a cooperative contractor, no interchanges within the approach area, and the desired lane geometry. Seven work zones meet the criteria and are included in the study. The engineer decides to use a before-and-after (sometimes designated B/A or b/a) experiment design with randomly selected control sites. The latter are sites in the same population as the treatment sites; that is, they meet the same selection criteria but are untreated (i.e., standard warning signs are employed, not the fluorescent orange signs). This is a strong experiment design because it minimizes three common types of bias in experiments: history, maturation, and regression to the mean. History bias exists when changes (e.g., new laws or large weather events) happen at about the same time as the treatment in an experiment, so that the engineer or analyst cannot separate the effect of the treatment from the effects of the other events. 
Maturation bias exists when gradual changes occur throughout an extended experiment period and cannot be separated from the effects of the treatment. Examples of maturation bias might involve changes like the aging of driver populations or new vehicles with more air bags. History and maturation biases are referred to as specification errors and are described in more detail in NCHRP Project 20-45, Volume 2,

Chapter 1, in the section "Quasi-Experiments." Regression-to-the-mean bias exists when sites with the highest MOE levels in the before time period are treated. If the MOE level falls in the after period, the analyst can never be sure how much of the fall was due to the treatment and how much was due to natural fluctuations in the values of the MOE back toward its usual mean value. A before-and-after study with randomly selected control sites minimizes these biases because their effects are expected to apply just as much to the treatment sites as to the control sites.

In this example, the engineer randomly selects four of the seven work zones to receive fluorescent orange signs. The other three randomly selected work zones receive standard orange signs and serve as the control sites. After the signs have been in place for a few weeks (a common tactic in before-and-after studies to allow regular drivers to get used to the change), the engineer collects data at all seven sites. The location of each vehicle's lane-change maneuver out of the lane to be closed is measured from videotape recorded for several hours at each site. Table 19 shows the lane-change data at the midpoint between the first warning sign and the beginning of the taper. Notice that approximately the same total number of vehicles is observed in the before and after periods.

Table 19. Lane-change data for before-and-after comparison using controls.

               Number of Vehicles Observed in Lane to be Closed at Midpoint
Time Period    Control    Treatment    Total
Before         1262       2099         3361
After          1476       1877         3353
Total          2738       3976         6714

4. Specification of Analysis Technique and Data Analysis: Depending on their format, data from a before-and-after experiment with control sites may be analyzed several ways. The data in the table lend themselves to analysis with a chi-square test to see whether the distributions between the before-and-after conditions are the same at both the treatment and control sites. (For more information about chi-square testing, see NCHRP Project 20-45, Volume 2, Chapter 6, Section E, "Chi-Square Test for Independence.")

To perform the chi-square test on the data for Example 13, the engineer first computes the expected value in each cell. For the cell corresponding to the before time period for control sites, this value is computed as the row total (3361) times the column total (2738) divided by the grand total (6714):
(3361 × 2738) / 6714 = 1371 vehicles
The engineer next computes the chi-square value for each cell using the following equation:
χi² = (Oi − Ei)² / Ei
where Oi is the number of actual observations in cell i and Ei is the expected number of observations in cell i. For example, the chi-square value in the cell corresponding to the before time period for control sites is (1262 − 1371)² / 1371 = 8.6. The engineer then sums the chi-square values from all four cells to get 29.1. That sum is then compared to the critical chi-square value for the significance level of 0.025 with 1 degree of freedom (degrees of freedom = [number of rows − 1] × [number of columns − 1]), which is shown on a standard chi-square distribution table to be 5.02 (see NCHRP Project 20-45, Volume 2, Appendix C, Table C-2). A significance level of 0.025 is not uncommon in such experiments (although 0.05 is a general default value), but it is a standard that is difficult but not impossible to meet.
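The same chi-square test can be run on the 2 × 2 table in Table 19 with standard software. A minimal Python sketch, assuming NumPy and SciPy are available; the Yates continuity correction is turned off so the result matches the hand calculation above.

```python
import numpy as np
from scipy import stats

# Observed counts from Table 19 (rows: before, after; columns: control, treatment)
observed = np.array([[1262, 2099],
                     [1476, 1877]])

chi2, p_value, dof, expected = stats.chi2_contingency(observed, correction=False)

print(f"chi-square = {chi2:.1f}, df = {dof}, p = {p_value:.2g}")
print("expected counts:")
print(np.round(expected))   # about 1371 for the before/control cell
```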

52 effective experiment Design and Data analysis in transportation research 5. Interpreting the Results: Because the calculated chi-square value is greater than the critical chi-square value, the engineer concludes that there is a statistically significant difference in the number of vehicles in the lane to be closed at the midpoint between the before-and-after time periods for the treatment sites relative to what would be expected based on the control sites. In other words, there is a difference that is due to the treatment. 6. Conclusion and Discussion: The experiment results show that fluorescent orange signs in work zone approaches like those tested would likely have a safety benefit. Although the engi- neer cannot reasonably estimate the number of collisions that would be avoided by using this treatment, the before-and-after study with control using a safety surrogate measure makes it clear that some collisions will be avoided. The strength of the experiment design with randomly selected control sites means that agencies can have confidence in the results. The consequences of an error in an analysis like this that results in the wrong conclusion can be devastating. If the error leads an agency to use a safety measure more than it should, precious safety funds will be wasted that could be put to better use. If the error leads an agency to use the safety measure less often than it should, money will be spent on measures that do not prevent as many collisions. With safety funds in such short supply, solid analyses that lead to effective decisions on countermeasure deployment are of great importance. A before-and-after experiment with control is difficult to arrange in practice. Such an experiment is practically impossible using collision data, because that would mean leaving some higher collision sites untreated during the experiment. Such experiments are more plausible using surrogate measures like the one described in this example. 7. Applications in Other Areas of Transportation Research: Before-and-after experiments with randomly selected control sites are difficult to arrange in transportation safety and other areas of transportation research. The instinct to apply treatments to the worst sites, rather than randomly—as this method requires—is difficult to overcome. Despite the difficulties, such experiments are sometimes performed in: • Traffic Operations—to test traffic control strategies at a number of different intersections. • Pavement Engineering—to compare new pavement designs and maintenance processes to current designs and practice. • Materials—to compare new materials, mixes, or processes to standard mixtures or processes. Example 14: Work Zones; Trend Analysis Area: Work zones Method of Analysis: Trend analysis (examining, describing, and modeling how something changes over time) 1. Research Question/Problem Statement: Measurements conducted over time often reveal patterns of change called trends. A model may be used to predict some future measurement, or the relative success of a different treatment or policy may be assessed. For example, work/ construction zone safety has been a concern for highway officials, engineers, and planners for many years. Is there a pattern of change? Question/Issue Can a linear model represent change over time? In this particular example, is there a trend over time for motor vehicle crashes in work zones? The problem is to predict values of crash frequency at specific points in time. 
Although the question is simple, the statistical modeling becomes sophisticated very quickly.

2. Identification and Description of Variables: Highway safety, or rather the lack of it, is revealed by the total number of fatalities due to motor vehicle crashes. The percentage of those deaths occurring in work zones reveals a pattern over time (Figure 12). The data points for the graph are calculated using the following equation:
WZP = a + b(YEAR) + u
where WZP = work-zone percentage of total fatalities, YEAR = calendar year, and u = an error term, as used here.

Figure 12. Percentage of all motor vehicle fatalities occurring in work zones.

3. Data Collection: The base data are obtained from the Fatality Analysis Reporting System maintained by the National Highway Traffic Safety Administration (NHTSA), as reported at www.workzonesafety.org. The data are state specific as well as for the country as a whole, and cover a period of 26 years from 1982 through 2007. The numbers of fatalities from motor vehicle crashes in and not in construction/maintenance zones (work zones) are used to compute the percentage of fatalities in work zones for each of the 26 years.

4. Specification of Analysis Techniques and Data Analysis: Ordinary least squares (OLS) regression is used to develop the general model specified above. The discussion in this example focuses on the resulting model and the related statistics. (See also examples 15, 16, and 17 for details on calculations. For more information about OLS regression, see NCHRP Project 20-45, Volume 2, Chapter 4, Section B, "Linear Regression.") Looking at the data in Figure 12 another way, the estimated model is:
WZP = −91.523 + 0.047(YEAR)
t-values: (−8.34) (8.51)
p-values: (0.000) (0.000)
R = 0.867, R² = 0.751
The trend is significant: the line (trend) shows an increase of 0.047% each year. Generally, this trend shows that work-zone fatalities are increasing as a percentage of total fatalities.
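A trend line like the one reported above is typically fit with standard regression software. The sketch below, assuming NumPy and SciPy are available, shows the mechanics using scipy.stats.linregress; the year-by-year series is synthetic (generated from the reported fitted line plus noise) purely so the sketch runs, since the actual FARS-derived percentages are not reproduced in this example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic illustration only: values generated around the reported fit
# WZP = -91.523 + 0.047 * YEAR. The real series comes from FARS data
# (www.workzonesafety.org) and is not reproduced here.
years = np.arange(1982, 2008)
wzp = -91.523 + 0.047 * years + rng.normal(0.0, 0.15, size=years.size)

fit = stats.linregress(years, wzp)
print(f"slope = {fit.slope:.3f} percentage points per year")
print(f"intercept = {fit.intercept:.1f}, R^2 = {fit.rvalue ** 2:.3f}, p = {fit.pvalue:.2g}")
```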

5. Interpreting the Results: The model fits the data well and generally shows that work-zone fatalities were an increasing problem over the period 1982 through 2007. This is a trend that highway officials, engineers, and planners would like to change. The analyst is therefore interested in anticipating the trajectory of the trend. Here the trend suggests that things are getting worse. How far might authorities let things go—5%? 10%? 25%? Caution must be exercised when interpreting a trend beyond the limits of the available data.

Technically, the slope, or b-coefficient, is the trend of the relationship. The a-term from the regression, also called the intercept, is the value of WZP when the independent variable equals zero. The intercept for the trend in this example would technically indicate that the percentage of motor vehicle fatalities in work zones in the year zero would be −91.5%. This is absurd on many levels. There could be no motor vehicles in year zero, and what is a negative percentage of the total? The absurdity of the intercept in this example reveals that trends are limited concepts, limited to a relevant time frame. Figure 12 also suggests that the trend, while valid for the 26 years in aggregate, does not work very well for the last 5 years, during which the percentages are consistently falling, not rising. Something seems to have changed around 2002; perhaps the highway officials, engineers, and planners took action to change the trend, in which case the trend reversal would be considered a policy success.

Finally, some underlying assumptions must be considered. For example, there is an implicit assumption that the types of roads with construction zones are similar from year to year. If this assumption is not correct (e.g., if a greater number of high-speed roads, where fatalities may be more likely, are worked on in some years than in others), then interpreting the trend may not make much sense.

6. Conclusion and Discussion: The computation of this dependent variable (the percentage of motor-vehicle fatalities occurring in work zones, or WZP) is influenced by changes in the number of work-zone fatalities and the number of non-work-zone fatalities. To some extent, both of these are random variables. Accordingly, it is difficult to distinguish a trend or trend reversal from a short series of possibly random movements in the same direction. Statistically, more observations permit greater confidence in non-randomness. It is also possible that a data series might be recorded that contains regular, non-random movements that are unrelated to a trend. Consider the dependent variable above (WZP), but measured using monthly data instead of annual data. Further, imagine looking at such data for a state in the upper Midwest instead of for the nation as a whole. In this new situation, the WZP might fall off or halt altogether each winter (when construction and maintenance work are minimized), only to rise again in the spring (reflecting renewed work-zone activity). This change is not a trend per se, nor is it random. Rather, it is cyclical.

7. Applications in Other Areas of Transportation Research: Applications of trend analysis models in other areas of transportation research include:
• Transportation Safety—to identify trends in traffic crashes (e.g., motor vehicle/deer) over time on some part of the roadway system (e.g., freeways).
• Public Transportation—to determine the trend in rail passenger trips over time (e.g., in response to increasing gas prices).
• Pavement Engineering—to monitor the number of miles of pavement that is below some service-life threshold over time.
• Environment—to monitor the hours of truck idling time in rest areas over time.

Example 15: Structures/Bridges; Trend Analysis
Area: Structures/bridges
Method of Analysis: Trend analysis (examining a trend over time)

1.
Research Question/Problem Statement: A state agency wants to monitor trends in the condition of bridge superstructures in order to perform long-term needs assessment for bridge rehabilitation or replacement. Bridge condition rating data will be analyzed for bridge

superstructures that have been inspected over a period of 15 years. The objective of this study is to examine the overall pattern of change in the indicator variable over time.

Question/Issue
Use collected data to determine if the values that some variables have taken show an increasing trend or a decreasing trend over time. In this example, determine if levels of structural deficiency in bridge superstructures have been increasing or decreasing over time, and determine how rapidly the increase or decrease has occurred.

2. Identification and Description of Variables: Bridge inspection generally entails collection of numerous variables, including location information, traffic data, structural elements (type and condition), and functional characteristics. Based on the severity of deterioration and the extent of its spread through a bridge component, a condition rating is assigned on a discrete scale from 0 (failed) to 9 (excellent). Generally, a condition rating of 4 or below indicates deficiency in a structural component. The state agency inspects approximately 300 bridges every year (denominator). The number of superstructures that receive a rating of 4 or below each year (number of events, numerator) also is recorded. The agency is concerned with the change in the overall rate (calculated per 100) of structurally deficient bridge superstructures. This rate, which is simply the ratio of the numerator to the denominator, is the indicator (dependent variable) to be examined for trend over a time period of 15 years. Notice that the unit of analysis is the time period and not the individual bridge superstructures.

3. Data Collection: Data are collected for bridges scheduled for inspection each year. It is important to note that the bridge condition rating scale is based on subjective categories, and therefore there may be inherent variability among inspectors in their assignment of ratings to bridge superstructures. Also, it is assumed that during the time period for which the trend analysis is conducted, no major changes are introduced in the bridge inspection methods. Sample data provided in Table 20 show the rate (per 100), the number of bridges per year that received a score of 4 or below, and the total number of bridges inspected per year.

Table 20. Sample bridge inspection data.

No.   Year   Rate (per 100)   Number of Events (Numerator)   Number of Bridges Inspected (Denominator)
1     1990   8.33             25                             300
2     1991   8.70             26                             299
5     1994   10.54            31                             294
11    2000   13.55            42                             310
15    2004   14.61            45                             308

4. Specification of Analysis Technique and Data Analysis: The data set consists of 15 observations, one for each year. Figure 13 shows a scatter plot of the rate (dependent variable) versus time in years. The scatter plot does not indicate the presence of any outliers. The scatter plot shows a seemingly increasing linear trend in the rate of deficient superstructures over time. No need for data transformation or smoothing is apparent from examination of the scatter plot in Figure 13. To determine whether the apparent linear trend is statistically significant in these data, ordinary least squares (OLS) regression can be employed.

Figure 13. Scatter plot of time (in years) versus rate (per 100).

The linear regression model takes the following form:
yi = βo + β1 xi + ei
where
i = 1, 2, . . . , n (n = 15 in this example),
y = dependent variable (rate of structurally deficient bridge superstructures),
x = independent variable (time),
βo = y-intercept (only provides a reference point),
β1 = slope (change in y per unit change in x), and
ei = residual error.

The first step is to estimate βo and β1 in the regression function. The residual errors (e) are assumed to be independently and identically distributed (i.e., they are mutually independent and have the same probability distribution). β1 and βo can be computed using the following equations:
β̂1 = Σi (xi − x̄)(yi − ȳ) / Σi (xi − x̄)² = 0.454
β̂o = ȳ − β̂1 x̄ = 8.396
where ȳ is the overall mean of the dependent variable and x̄ is the overall mean of the independent variable. The prediction equation for the rate of structurally deficient bridge superstructures over time can be written as follows:
ŷ = β̂o + β̂1 x = 8.396 + 0.454x
That is, as time increases by a year, the rate of structurally deficient bridge superstructures increases by 0.454 per 100 bridges. The plot of the regression line is shown in Figure 14. Figure 14 indicates some small variability about the regression line. To conduct hypothesis testing for the regression relationship (Ho: β1 = 0), assessment of this variability and the assumption of normality would be required. (For a discussion of assumptions for residual errors, see NCHRP Project 20-45, Volume 2, Chapter 4.)

Figure 14. Plot of the regression line (ŷ = 8.396 + 0.454x, R² = 0.949).

Like analysis of variance (ANOVA, described in examples 8, 9, and 10), statistical inference is initiated by partitioning the total sum of squares (TSS) into the error sum of squares (SSE)

Figure 14. Plot of regression line: ŷ = 8.396 + 0.454x (R² = 0.949).

Like analysis of variance (ANOVA, described in Examples 8, 9, and 10), statistical inference is initiated by partitioning the total sum of squares (TSS) into the error sum of squares (SSE) and the model sum of squares (SSR). That is, TSS = SSE + SSR. The TSS is defined as the sum of the squares of the difference of each observation from the overall mean. In other words, deviation of observation from overall mean (TSS) = deviation of observation from prediction (SSE) + deviation of prediction from overall mean (SSR). For our example,

TSS = Σ(y_i − ȳ)² = 60.892
SSR = β̂1²·Σ(x_i − x̄)² = 57.790
SSE = TSS − SSR = 3.102

Regression analysis computations are usually summarized in a table (see Table 21).

Table 21. Analysis of regression table.
Source       Sum of Squares (SS)   Degrees of Freedom (df)   Mean Square     F         Significance
Regression   57.790                1                         57.790 (MSR)    242.143   8.769e-10
Error        3.102                 13                        0.239 (MSE)
Total        60.892                14

The mean squared errors (MSR, MSE) are computed by dividing the sums of squares by the corresponding model and error degrees of freedom. For the null hypothesis (H0: β1 = 0) to be true, the expected value of MSR is equal to the expected value of MSE, such that F = MSR/MSE should be a random draw from an F-distribution with 1, n − 2 degrees of freedom. From the regression shown in Table 21, F is computed to be 242.143, and the probability of getting a value larger than the F computed is extremely small. Therefore, the null hypothesis is rejected; that is, the slope is significantly different from zero, and the linearly increasing trend is found to be statistically significant. Notice that a slope of zero implies that knowing a value of the independent variable provides no insight on the value of the dependent variable.
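A minimal sketch of this sum-of-squares partition and the associated F test is shown below. It assumes the x and y lists and the estimates beta0_hat and beta1_hat from the previous sketch; scipy is used only to look up the upper-tail F probability.

```python
from scipy import stats

def f_test_simple_regression(x, y, beta0_hat, beta1_hat):
    """Partition TSS into SSR and SSE and test H0: slope = 0 for a simple linear fit."""
    n = len(y)
    y_bar = sum(y) / n
    y_hat = [beta0_hat + beta1_hat * xi for xi in x]

    tss = sum((yi - y_bar) ** 2 for yi in y)               # total sum of squares
    sse = sum((yi - yh) ** 2 for yi, yh in zip(y, y_hat))  # error sum of squares
    ssr = tss - sse                                        # model sum of squares

    msr = ssr / 1.0              # model mean square (1 degree of freedom)
    mse = sse / (n - 2)          # error mean square (n - 2 degrees of freedom)
    f = msr / mse
    p = stats.f.sf(f, 1, n - 2)  # probability of a larger F under the null hypothesis
    r_squared = 1.0 - sse / tss  # coefficient of determination

    return {"TSS": tss, "SSR": ssr, "SSE": sse, "F": f, "p": p, "R2": r_squared}
```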

5. Interpreting the Results: The linear regression model does not imply any cause-and-effect relationship between the independent and dependent variables. The y-intercept only provides a reference point, and the relationship need not be linear outside the data range. The 95% confidence interval for β1 is computed as [0.391, 0.517]; that is, the analyst is 95% confident that the true mean increase in the rate of structurally deficient bridge superstructures is between 0.391% and 0.517% per year. (For a discussion on computing confidence intervals, see NCHRP Project 20-45, Volume 2, Chapter 4.)

The coefficient of determination (R²) provides an indication of the model fit. For this example, R² is calculated using the following equation:

R² = 1 − SSE/TSS = 0.949

The R² indicates that the regression model accounts for 94.9% of the total variation in the (hypothetical) data. It should be noted that such a high value of R² is almost impossible to attain from analysis of real observational data collected over a long time. Also, distributional assumptions must be checked before proceeding with linear regression, as serious violations may indicate the need for data transformation, use of non-linear regression or non-parametric methods, and so on.

6. Conclusion and Discussion: In this example, simple linear regression has been used to determine the trend in the rate of structurally deficient bridge superstructures in a geographic area. In addition to assessing the overall patterns of change, trend analysis may be performed to:
• study the levels of indicators of change (or dependent variables) in different time periods to evaluate the impact of technical advances or policy changes;
• compare different geographic areas or different populations with perhaps varying degrees of exposure in absolute and relative terms; and
• make projections to monitor progress toward an objective.
However, given the dynamic nature of trend data, many of these applications require more sophisticated techniques than simple linear regression.

An important aspect of examining trends over time is the accuracy of numerator and denominator data. For example, bridge structures may be examined more than once during the analysis time period, and retrofit measures may be taken at some deficient bridges. Also, the age of structures is not accounted for in this analysis. For the purpose of this example, it is assumed that these (and other similar) effects are negligible and do not confound the data. In real-life application, however, if the analysis time period is very long, it becomes extremely important to account for changes in factors that may have affected the dependent variable(s) and their measurement. An example of the latter could be changes in the volume of heavy trucks using the bridge, changes in maintenance policies, or changes in plowing and salting regimes.

7. Applications in Other Areas of Transportation Research: Trend analysis is carried out in many areas of transportation research, such as:
• Transportation Planning/Traffic Operations—to determine the need for capital improvements by examining traffic growth over time.
• Traffic Safety—to study the trends in overall, fatal, and/or injury crash rates over time in a geographic area.
• Pavement Engineering—to assess the long-term performance of pavements under varying loads.
• Environment—to monitor the emission levels from commercial traffic over time with growth of industrial areas.

Example 16: Transportation Planning; Multiple Regression Analysis
Area: Transportation planning
Method of Analysis: Multiple regression analysis (testing proposed linear models with more than one independent variable when all variables are continuous)

1. Research Question/Problem Statement: Transportation planners and engineers often work on variations of the classic four-step transportation planning process for estimating travel demand. The first step, trip generation, generally involves developing a model that can be used to predict the number of trips originating or ending in a zone, which is a geographical subdivision of a corridor, city, or region (also referred to as a traffic analysis zone or TAZ). The objective is to develop a statistical relationship (a model) that can be used to explain the variation in a dependent variable based on the variation of one or more independent variables. In this example, ordinary least squares (OLS) regression is used to develop a model between trips generated (the dependent variable) and demographic, socio-economic, and employment variables (independent variables) at the household level.

Question/Issue
Can a linear relationship (model) be developed between a dependent variable and one or more independent variables? In this application, the dependent variable is the number of trips produced by households. Independent variables include persons, workers, and vehicles in a household, household income, and average age of persons in the household. The basic question is whether the relationship between the dependent (Y) and independent (X) variables can be represented by a linear model using two coefficients (a and b), expressed as follows:

Y = a + b·X

where a = the intercept and b = the slope of the line. If the relationship being examined involves more than one independent variable, the equation will simply have more terms. In addition, in a more formal presentation, the equation will also include an error term, e, added at the end.

2. Identification and Description of Variables: Data for four-step modeling of travel demand or for calibration of any specific model (e.g., trip generation or trip origins) come from a variety of sources, ranging from the U.S. Census to mail or telephone surveys. The data that are collected will depend, in part, on the specific purpose of the modeling effort. Data appropriate for a trip-generation model typically are collected from some sort of household survey.

For the dependent variable in a trip-generation model, data must be collected on trip-making characteristics. These characteristics could include something as simple as the total trips made by a household in a day or involve more complicated breakdowns by trip purpose (e.g., work-related trips versus shopping trips) and time of day (e.g., trips made during peak and non-peak hours). The basic issue that must be addressed is to determine the purpose of the proposed model: What is to be estimated or predicted? Weekdays and work trips normally are associated with peak congestion and are often the focus of these models.

For the independent variable(s), the analyst must first give some thought to what would be the likely causes for household trips to vary. For example, it makes sense intuitively that household size might be pertinent (i.e., it seems reasonable that more persons in the household would lead to a higher number of household trips). Household members could be divided into workers and non-workers, two variables instead of one. Likewise, other socio-economic characteristics, such as income-related variables, might also make sense as candidate variables for the model.
Data are collected on a range of candidate variables, and

the analysis process is used to sort through these variables to determine which combination leads to the best model. To be used in ordinary regression modeling, variables need to be continuous; that is, measured on a ratio or interval scale. Nominal data may be incorporated through the use of indicator (dummy) variables. (For more information on continuous variables, see NCHRP Project 20-45, Volume 2, Chapter 1; for more information on dummy variables, see NCHRP Project 20-45, Volume 2, Chapter 4.)

3. Data Collection: As noted, data for modeling travel demand often come from surveys designed especially for the modeling effort. Data also may be available from centralized sources such as a state DOT or local metropolitan planning organization (MPO).

4. Specification of Analysis Techniques and Data Analysis: In this example, data for 178 households in a small city in the Midwest have been provided by the state DOT. The data are obtained from surveys of about 15,000 households all across the state. This example uses only a tiny portion of the data set (see Table 22). Based on the data, a fairly obvious relationship is initially hypothesized: more persons in a household (PERS) should produce more person-trips (TRIPS).

In its simplest form, the regression model has one dependent variable and one independent variable. The underlying assumption is that variation in the independent variable causes the variation in the dependent variable. For example, the dependent variable might be TRIPS_i (the count of total trips made on a typical weekday), and the independent variable might be PERS (the total number of persons, or occupants, in the household). Expressing the relationship between TRIPS and PERS for the ith household in a sample of households results in the following hypothesized model:

TRIPS_i = a + b·PERS_i + ε_i

where a and b are coefficients to be determined by ordinary least squares (OLS) regression analysis and ε_i is the error term. The difference between the value of TRIPS for any household predicted using the developed equation and the actual observed value of TRIPS for that same household is called the residual. The resulting model is an equation for the best-fit straight line (for the given data), where a is the intercept and b is the slope of the line. (For more information about fitted regression and measures of fit, see NCHRP Project 20-45, Volume 2, Chapter 4.)

In Table 22, R is the multiple R, the correlation coefficient in the case of the simplest linear regression involving one variable (also called univariate regression). The R² (coefficient of determination) may be interpreted as the proportion of the variance of the dependent variable explained by the fitted regression model. The adjusted R² corrects for the number of independent variables in the equation. A "perfect" R² of 1.0 could be obtained if one included enough independent variables (e.g., one for each observation), but doing so would hardly be useful.

Table 22. Regression model statistics.
Coefficients   t-values (statistics)   p-values   Measures of Fit
a = 3.347      4.626                   0.000      R = 0.510
b = 2.001      7.515                   0.000      R² = 0.260; Adjusted R² = 0.255
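The kind of univariate fit summarized in Table 22 can be reproduced with standard statistical software. The sketch below uses Python's statsmodels package and assumes the household survey data are available in a pandas DataFrame named `households` with columns `TRIPS` and `PERS` (hypothetical names and values chosen only for this illustration).

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical DataFrame layout: one row per surveyed household (illustrative values).
households = pd.DataFrame({
    "TRIPS": [5, 9, 4, 12, 7, 10],   # total weekday person-trips
    "PERS":  [1, 3, 2, 5, 2, 4],     # persons in the household
})

# Univariate OLS model: TRIPS = a + b * PERS + error
fit = smf.ols("TRIPS ~ PERS", data=households).fit()

print(fit.params)                      # intercept (a) and slope (b)
print(fit.tvalues)                     # t-statistics for each coefficient
print(fit.pvalues)                     # two-sided p-values
print(fit.rsquared, fit.rsquared_adj)  # R² and adjusted R²
```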

Restating the now-calibrated model (using the coefficients from Table 22):

TRIPS = 3.347 + 2.001·PERS

The statistical significance of each coefficient estimate is evaluated with the p-values of the calculated t-statistics, provided the errors are normally distributed. The p-values (also known as probability values) generally indicate whether the coefficients are significantly different from zero (which they need to be in order for the model to be useful). More formally stated, a p-value is the probability of obtaining a result at least as extreme as the one observed if the null hypothesis were true (i.e., the risk of a Type I error if the null hypothesis is rejected).

In this example, the t- and p-values shown in Table 22 indicate that both a and b are significantly different from zero at better than the 99.9% confidence level. P-values are generally offered as two-tail (two-sided hypothesis testing) test values in results from most computer packages; one-tail (one-sided) values may sometimes be obtained by dividing the printed p-values by two. (For more information about one-sided versus two-sided hypothesis testing, see NCHRP Project 20-45, Volume 2, Chapter 4.) The R² may be tested with an F-statistic; in this example, the F was calculated as 56.469 (degrees of freedom = 1, 176). (See NCHRP Project 20-45, Volume 2, Chapter 4.) This means that the model explains a significant amount of the variation in the dependent variable.

A plot of the estimated model (line) and the actual data are shown in Figure 15.

Figure 15. Plot of the line for the estimated model.

A strict interpretation of this model suggests that a household with zero occupants (PERS = 0) will produce 3.347 trips per day. Clearly, this is not feasible because there can't be a household of zero persons, which illustrates the kind of problem encountered when a model is extrapolated beyond the range of the data used for the calibration. In other words, a formal test of the intercept (the a) is not always meaningful or appropriate.

Extension of the Model to Multivariate Regression: When the list of potential independent variables is considered, the researcher or analyst might determine that more than one cause for variation in the dependent variable may exist. In the current example, the question of whether there is more than one cause for variation in the number of trips can be considered.

The model just discussed for evaluating the effect of one independent variable is called a univariate model. Should the final model for this example be multivariate? Before determining the final model, the analyst may want to consider whether a variable or variables exist that further clarify what has already been modeled (e.g., more persons cause more trips). The variable PERS is a crude measure, made up of workers and non-workers. Most households have one or two workers. It can be shown that a measure of the non-workers in the household is more effective in explaining trips than is total persons; so a new variable, persons minus workers (DEP), is calculated.

Next, variables may exist that address entirely different causal relationships. It might be hypothesized that as the number of registered motor vehicles available in the household (VEH) increases, the number of trips will increase. It may also be argued that as household income (INC, measured in thousands of dollars) increases, the number of trips will increase. Finally, it may be argued that as the average age of household occupants (AVEAGE) increases, the number of trips will decrease because retired people generally make fewer trips. Each of these statements is based upon a logical argument (hypothesis). Given these arguments, the hypothesized multivariate model takes the following form:

TRIPS_i = a + b·DEP_i + c·VEH_i + d·INC_i + e·AVEAGE_i + ε_i

The results from fitting the multivariate model are given in Table 23. Results of the analysis of variance (ANOVA) for the overall model are shown in Table 24.

Table 23. Results from fitting the multivariate model.
Coefficients      t-values (statistics)   p-values    Measures of Fit
a = 8.564         6.274                   3.57E-09*   R = 0.589
b = 0.899         2.832                   0.005       R² = 0.347
c = 1.067         3.360                   0.001       Adjusted R² = 0.330
d = 1.907E-05*    1.927                   0.056
e = −0.098        −4.808                  3.68E-06
*See note about scientific notation in Section 5, Interpreting the Results.

Table 24. ANOVA results for the overall model.
Source       Sum of Squares (SS)   Degrees of Freedom (df)   F-ratio   p-value
Regression   1487.5                4                         19.952    3.4E-13
Residual     2795.7                150

5. Interpreting the Results: It is common for regression packages to provide some values in scientific notation, as shown for the p-values in Table 23. The coefficient d, showing the relationship of TRIPS with INC, is read 1.907E-05, which in turn is read as 1.907 × 10⁻⁵ or 0.00001907. All coefficients are of the expected sign and significantly different from 0 (at the 0.05 level) except for d. However, testing the intercept makes little sense. (The intercept value would be the number of trips for a household with 0 vehicles, 0 income, 0 average age, and 0 dependents, a most unlikely household.) The overall model is significant as shown by the F-ratio and its p-value, meaning that the model explains a significant amount of the variation in the dependent variable.
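A sketch of how such a multivariate fit and its ANOVA summary might be produced is shown below, again using statsmodels and assuming a hypothetical `households` DataFrame with the columns named in the text (TRIPS, DEP, VEH, INC, AVEAGE).

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf

# `households` is assumed to be a pandas DataFrame with one row per household
# and columns TRIPS, DEP, VEH, INC, AVEAGE (hypothetical layout for illustration).

def fit_trip_generation(households):
    # Multivariate OLS model: TRIPS = a + b*DEP + c*VEH + d*INC + e*AVEAGE + error
    fit = smf.ols("TRIPS ~ DEP + VEH + INC + AVEAGE", data=households).fit()

    print(fit.summary())                  # coefficients, t-values, p-values, R², adjusted R²
    print(sm.stats.anova_lm(fit, typ=1))  # ANOVA table for the fitted model
    return fit
```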

The model explains about 33% of the variance of household trip generation (adjusted R² = 0.330). Caution should be exercised when interpreting the significance of the R² and the overall model because it is not uncommon to have a significant F-statistic when some of the coefficients in the equation are not significant. The analyst may want to consider recalibrating the model without the income variable because the coefficient d was insignificant.

6. Conclusion and Discussion: Regression, particularly OLS regression, relies on several assumptions about the data, the nature of the relationships, and the results. Data are assumed to be interval or ratio scale. Independent variables generally are assumed to be measured without error, so all error is attributed to the model fit. Furthermore, independent variables should be independent of one another. This is a serious concern because the presence in the model of related independent variables, called multicollinearity, compromises the t-tests and confuses the interpretation of coefficients. Tests of this problem are available in most statistical software packages that include regression. Look for Variance-Inflation Factor (VIF) and/or Tolerance tests; most packages will have one or the other, and some will have both.

In the example above, where PERS is divided into DEP and workers, knowing any two variables allows the calculation of the third. Including all three variables in the model would be a case of extreme multicollinearity and, logically, would make no sense. In this instance, because one variable is a linear combination of the other two, the calculations required (within the analysis program) to calibrate the model would actually fail. If the independent variables are simply highly correlated, the regression coefficients (at a minimum) may not have intuitive meaning. In general, equations or models with highly correlated independent variables are to be avoided; alternative models that examine one variable or the other, but not both, should be analyzed.

It is also important to analyze the error distributions. Several assumptions relate to the errors and their distributions (normality, constant variance, uncorrelated, etc.). In transportation planning, spatial variables and associations might become important; they require more elaborate constructs and often different estimation processes (e.g., Bayesian, Maximum Likelihood). (For more information about errors and error distributions, see NCHRP Project 20-45, Volume 2, Chapter 4.)

Other logical considerations also exist. For example, for the measurement units of the different variables, does the magnitude of the result of multiplying the coefficient and the measured variable make sense and/or have a reasonable effect on the predicted magnitude of the dependent variable? Perhaps more importantly, do the independent variables make sense? In this example, does it make sense that changes in the number of vehicles in the household would cause an increase or decrease in the number of trips? These are measures of operational significance that go beyond consideration of statistical significance, but are no less important.
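As a concrete illustration of the multicollinearity check described above, the sketch below computes variance inflation factors (VIFs) for the candidate independent variables; it assumes the same hypothetical `households` DataFrame used earlier.

```python
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def vif_report(households, columns=("DEP", "VEH", "INC", "AVEAGE")):
    """Print a variance inflation factor for each candidate independent variable.

    VIF values well above roughly 5-10 are a common warning sign of
    multicollinearity; a variable that is an exact linear combination of the
    others (e.g., PERS together with DEP and workers) produces an infinite VIF.
    """
    X = sm.add_constant(households[list(columns)])
    for i, name in enumerate(X.columns):
        if name == "const":
            continue
        print(f"{name}: VIF = {variance_inflation_factor(X.values, i):.2f}")
```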
7. Applications in Other Areas of Transportation Research: Regression is a very important technique across many areas of transportation research, including:
• Transportation Planning
  – to include the other half of trip generation, e.g., predicting trip destinations as a function of employment levels by various types (factory, commercial), square footage of shopping center space, and so forth.
  – to investigate the trip distribution stage of the 4-step model (log transformation of the gravity model).
• Public Transportation—to predict loss/liability on subsidized freight rail lines (function of segment ton-miles, maintenance budgets and/or standards, operating speeds, etc.) for self-insurance computations.
• Pavement Engineering—to model pavement deterioration (or performance) as a function of easily monitored predictor variables.

Example 17: Traffic Operations; Regression Analysis
Area: Traffic operations
Method of Analysis: Regression analysis (developing a model to predict the values that some variable can take as a function of one or more other variables, when not all variables are assumed to be continuous)

1. Research Question/Problem Statement: An engineer is concerned about false capacity at intersections being designed in a specified district. False capacity occurs where a lane is dropped just beyond a signalized intersection. Drivers approaching the intersection and knowing that the lane is going to be dropped shortly afterward avoid the lane. However, engineers estimating the capacity and level of service of the intersection during design have no reliable way to estimate the percentage of traffic that will avoid the lane (the lane distribution).

Question/Issue
Develop a model that can be used to predict the values that a dependent variable can take as a function of changes in the values of the independent variables. In this particular instance, how can engineers make a good estimate of the lane distribution of traffic volume in the case of a lane drop just beyond an intersection? Can a linear model be developed that can be used to predict this distribution based on other variables?

The basic question is whether a linear relationship exists between the dependent variable (Y; in this case, the lane distribution percentage) and some independent variable(s) (X). The relationship can be expressed using the following equation:

Y = a + b·X

where a is the intercept and b is the slope of the line (see NCHRP Project 20-45, Volume 2, Chapter 4, Section B).

2. Identification and Description of Variables: The dependent variable of interest in this example is the volume of traffic in each lane on the approach to a signalized intersection with a lane drop just beyond. The traffic volumes by lane are converted into lane utilization factors (fLU) to be consistent with standard highway capacity techniques. The Highway Capacity Manual defines fLU using the following equation:

fLU = vg / (vg1 · N)

where vg is the flow rate in a lane group in vehicles per hour, vg1 is the flow rate in the lane with the highest flow rate of any in the group in vehicles per hour, and N is the number of lanes in the lane group.

The engineer thinks that lane utilization might be explained by one or more of 15 different factors, including the type of lane drop, the distance from the intersection to the lane drop, the taper length, and the heavy vehicle percentage. All of the variables are continuous except the type of lane drop. The type of lane drop is used to categorize the sites.

3. Data Collection: The engineer locates 46 lane-drop sites in the area and collects data at these sites by means of video recording. The engineer tapes for up to 3 hours at each site. The data are summarized in 15-minute periods, again to be consistent with standard highway capacity practice. For one type of lane-drop geometry, with two through lanes and an exclusive right-turn lane on the approach to the signalized intersection, the engineer ends up with 88 valid

data points (some sites have provided more than one data point), covering 15 minutes each, to use in equation (model) development.

4. Specification of Analysis Technique and Data Analysis: Multiple (or multivariate) regression is a standard statistical technique to develop predictive equations. (More information on this topic is given in NCHRP Project 20-45, Volume 2, Chapter 4, Section B.) The engineer performs five steps to develop the predictive equation.

Step 1. The engineer examines plots of each of the 15 candidate variables versus fLU to see if there is a relationship and to see what forms the relationships might take.

Step 2. The engineer screens all 15 candidate variables for multicollinearity. (Multicollinearity occurs when two variables are related to each other and essentially contribute the same information to the prediction.) Multicollinearity can lead to models with poor predicting power and other problems. The engineer examines the variables for multicollinearity by
• looking at plots of each of the 15 candidate variables against every other candidate variable;
• calculating the correlation coefficient for each of the 15 candidate independent variables against every other candidate variable; and
• using more sophisticated tests (such as the variance inflation factor) that are available in statistical software.

Step 3. The engineer reduces the set of candidate variables to eight. Next, the engineer uses statistical software to select variables and estimate the coefficients for each selected variable, assuming that the regression equation has a linear form. To select variables, the engineer employs forward selection (adding variables one at a time until the equation fit ceases to improve significantly) and backward elimination (starting with all candidate variables in the equation and removing them one by one until the equation fit starts to deteriorate). The equation fit is measured by R² (for more information, see NCHRP Project 20-45, Volume 2, Chapter 4, Section B, under the heading "Descriptive Measures of Association Between X and Y"), which shows how well the equation fits the data on a scale from 0 to 1, and by other factors provided by statistical software. In this case, forward selection and backward elimination result in an equation with five variables:
• Drop: Lane drop type, a 0 or 1 depending on the type;
• Left: Left turn status, a 0 or 1 depending on the types of left turns allowed;
• Length: The distance from the intersection to the lane drop, in feet ÷ 1,000;
• Volume: The average lane volume, in vehicles per hour per lane ÷ 1,000; and
• Sign: The number of signs warning of the lane drop.
Notice that the first two variables are discrete variables and had to assume a zero-or-one format to work within the regression model. Each of the five variables has a coefficient that is significantly different from zero at the 95% confidence level, as measured by a t-test. (For more information, see NCHRP Project 20-45, Volume 2, Chapter 4, Section B, "How Are t-statistics Interpreted?")

Step 4. Once an initial model has been developed, the engineer plots the residuals for the tentative equation to see whether the assumed linear form is correct. A residual is the difference, for each observation, between the prediction the equation makes for fLU and the actual value of fLU.
In this example, a plot of the predicted value versus the residual for each of the 88 data points shows a fan-like shape, which indicates that the linear form is not appropriate. (NCHRP Project 20-45, Volume 2, Chapter 4, Section B, Figure 6 provides examples of residual plots that are and are not desirable.) The engineer experiments with several other model forms, including non-linear equations that involve transformations of variables, before settling on a lognormal form that provides a good R² value of 0.73 and a desirable shape for the residual plot.
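A residual check of this kind is straightforward to sketch in code. The example below assumes arrays `y_observed` and `y_predicted` from a fitted model (hypothetical names) and simply plots residuals against predictions; a fan-like spread suggests trying a transformed (e.g., log) dependent variable, as the engineer does here.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_residuals(y_observed, y_predicted):
    """Scatter residuals against predicted values to check the assumed model form."""
    residuals = np.asarray(y_observed) - np.asarray(y_predicted)

    plt.scatter(y_predicted, residuals)
    plt.axhline(0.0, linestyle="--")  # reference line at zero residual
    plt.xlabel("Predicted lane utilization factor")
    plt.ylabel("Residual (observed - predicted)")
    plt.title("Residual plot: a fan shape suggests a transformation is needed")
    plt.show()
```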

Step 5. Finally, the engineer examines the candidate equation for logic and practicality, asking whether the variables make sense, whether the signs of the variables make sense, and whether the variables can be collected easily by design engineers. Satisfied that the answers to these questions are "yes," the final equation (model) can be expressed as follows:

fLU = exp(0.539 − 0.218·Drop − 0.148·Left + 0.178·Length + 0.627·Volume − 0.105·Sign)

5. Interpreting the Results: The process described in this example results in a useful equation for estimating the lane utilization in a lane to be dropped, thereby avoiding the estimation of false capacity. The equation has five terms and is non-linear, which will make its use a bit challenging. However, the database is large, the equation fits the data well, and the equation is logical, which should boost the confidence of potential users. If potential users apply the equation within the ranges of the data used for the calibration, the equation should provide good predictions. Applying any model outside the range of the data on which it was calibrated increases the likelihood of an inaccurate prediction.

6. Conclusion and Discussion: Regression is a powerful statistical technique that provides models engineers can use to make predictions in the absence of direct observation. Engineers tempted to use regression techniques should notice from this and other examples that the effort is substantial. Engineers using regression techniques should not skip any of the steps described above, as doing so may result in equations that provide poor predictions to users.

Analysts considering developing a regression model to help make needed predictions should not be intimidated by the process. Although there are many pitfalls in developing a regression model, analysts considering making the effort should also consider the alternative: how the prediction will be made in the absence of a model. In the absence of a model, predictions of important factors like lane utilization would be made using tradition, opinion, or simple heuristics. With guidance from NCHRP Project 20-45 and other texts, and with good software available to make the calculations, credible regression models often can be developed that perform better than the traditional prediction methods.

Because regression models developed by transportation engineers are often reused in later studies by others, the stakes are high. The consequences of a model that makes poor predictions can be severe in terms of suboptimal decisions. Lane utilization models often are employed in traffic studies conducted to analyze new development proposals. A model that under-predicts utilization in a lane to be dropped may mean that the development is turned down due to the anticipated traffic impacts or that the developer has to pay for additional and unnecessary traffic mitigation measures. On the other hand, a model that over-predicts utilization in a lane to be dropped may mean that the development is approved with insufficient traffic mitigation measures in place, resulting in traffic delays, collisions, and the need for later intervention by a public agency.
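For readers who want to apply the final model from Step 5, a small helper function is sketched below. It simply evaluates the fitted equation; the argument names mirror the variable definitions in Step 3 (Length and Volume are already divided by 1,000, as in the calibration), and the example inputs are hypothetical.

```python
import math

def lane_utilization_factor(drop, left, length_kft, volume_kvphpl, signs):
    """Evaluate the fitted lane-utilization model from Step 5.

    drop, left     -- 0/1 indicator variables for lane-drop type and left-turn status
    length_kft     -- distance from the intersection to the lane drop, in feet / 1000
    volume_kvphpl  -- average lane volume, in vehicles per hour per lane / 1000
    signs          -- number of signs warning of the lane drop
    """
    return math.exp(0.539
                    - 0.218 * drop
                    - 0.148 * left
                    + 0.178 * length_kft
                    + 0.627 * volume_kvphpl
                    - 0.105 * signs)

# Example use (hypothetical inputs): a 500-ft lane drop, 800 veh/h/ln, two warning signs.
print(lane_utilization_factor(drop=1, left=0, length_kft=0.5, volume_kvphpl=0.8, signs=2))
```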
7. Applications in Other Areas of Transportation Research: Regression is used in almost all areas of transportation research, including:
• Transportation Planning—to create equations to predict trip generation and mode split.
• Traffic Safety—to create equations to predict the number of collisions expected on a particular section of road.
• Pavement Engineering/Materials—to predict long-term wear and condition of pavements.

Example 18: Transportation Planning; Logit and Related Analysis
Area: Transportation planning
Method of Analysis: Logit and related analysis (developing predictive models when the dependent variable is dichotomous—e.g., 0 or 1)

1. Research Question/Problem Statement: Transportation planners often utilize variations of the classic four-step transportation planning process for predicting travel demand. Trip generation, trip distribution, mode split, and trip assignment are used to predict traffic flows under a variety of forecasted changes in networks, population, land use, and controls. Mode split, deciding which mode of transportation a traveler will take, requires predicting mutually exclusive outcomes. For example, will a traveler utilize public transit or drive his or her own car?

Question/Issue
Can a linear model be developed that can be used to predict the probability that one of two choices will be made? In this example, the question is whether a household will use public transit (or not). Rather than being continuous (as in linear regression), the dependent variable is reduced to two categories, a dichotomous variable (e.g., yes or no, 0 or 1). Although the question is simple, the statistical modeling becomes sophisticated very quickly.

2. Identification and Description of Variables: Considering a typical, traditional urban area in the United States, it is reasonable to argue that the likelihood of taking public transit to work (Y) will be a function of income (X). Generally, more income means less likelihood of taking public transit. This can be modeled using the following equation:

Y_i = β1 + β2·X_i + u_i

where
X_i = household income,
Y = 1 if the household uses public transit, and
Y = 0 if the household does not use public transit.

3. Data Collection: These data normally are obtained from travel surveys conducted at the local level (e.g., by a metropolitan area or specific city), although the agency that collects the data often is a state DOT.

4. Specification of Analysis Techniques and Data Analysis: In this example the dependent variable is dichotomous and is a linear function of an explanatory variable. Consider the equation E(Y_i | X_i) = β1 + β2·X_i. Notice that if P_i = probability that Y = 1 (the household utilizes transit), then (1 − P_i) = probability that Y = 0 (the household does not utilize transit). This has been called a linear probability model. Note that within this expression, "i" refers to a household. Thus, Y has the distribution shown in Table 25.

Table 25. Distribution of Y.
Values that Y Takes   Probability   Meaning/Interpretation
1                     P_i           Household uses transit
0                     1 − P_i       Household does not use transit
Total                 1.0

Any attempt to estimate this relationship with standard (OLS) regression is saddled with many problems (e.g., non-normality of errors, heteroscedasticity, and the possibility that the predicted Y will be outside the range 0 to 1, to say nothing of pretty terrible R² values).

An alternative formulation for estimating P_i, the cumulative logistic distribution, is expressed by the following equation:

P_i = 1 / (1 + e^−(β1 + β2·X_i))

This function can be plotted as a lazy Z-curve where on the left, with low values of X (low household income), the probability starts near 1 and ends at 0 (Figure 16). Notice that, even at 0 income, not all households use transit. The curve is said to be asymptotic to 1 and 0. The value of P_i varies between 1 and 0 in relation to income, X.

Figure 16. Plot of cumulative logistic distribution showing a lazy Z-curve.

Manipulating the definition of the cumulative logistic distribution from above,

P_i·(1 + e^−(β1 + β2·X_i)) = 1
P_i + P_i·e^−(β1 + β2·X_i) = 1
P_i·e^−(β1 + β2·X_i) = 1 − P_i
e^−(β1 + β2·X_i) = (1 − P_i)/P_i

and

e^(β1 + β2·X_i) = P_i/(1 − P_i)

The final expression is the ratio of the probability of utilizing public transit divided by the probability of not utilizing public transit. It is called the odds ratio. Next, taking the natural log of both sides (and reversing) results in the following equation:

L_i = ln[P_i/(1 − P_i)] = β1 + β2·X_i

L is called the logit, and this is called a logit model. The left side is the natural log of the odds ratio. Unfortunately, this odds ratio is meaningless for individual households, where the probability is either 0 or 1 (utilize or not utilize).
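The relationship between the logistic probability, the odds, and the logit can be illustrated with a few lines of code. The sketch below uses arbitrary placeholder values for β1, β2, and income; it is not the fitted model, which is estimated later in this example.

```python
import math

def transit_probability(income, beta1, beta2):
    """Cumulative logistic model: P = 1 / (1 + exp(-(beta1 + beta2 * income)))."""
    return 1.0 / (1.0 + math.exp(-(beta1 + beta2 * income)))

# Placeholder coefficients for illustration only (not the fitted values).
beta1, beta2 = 1.0, -0.00004

for income in (6_000, 20_000, 75_000):
    p = transit_probability(income, beta1, beta2)
    odds = p / (1.0 - p)    # odds of using transit
    logit = math.log(odds)  # equals beta1 + beta2 * income
    print(f"income={income:>6}: P={p:.3f}, odds={odds:.3f}, logit={logit:.3f}")
```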

If the analyst uses standard OLS regression on this equation, with data for individual households, there is a problem because when P_i happens to equal either 0 or 1 (which is all the time!), the odds ratio will, as a result, equal either 0 or infinity (and the logarithm will be undefined) for all observations. However, by using groups of households the problem can be mitigated.

Table 26 presents data based on a survey of 701 households, more than half of which (380) use transit. The income data are recorded for intervals; here, interval mid-points (Xj) are shown. The number of households in each income category is tallied (Nj), as is the number of households in each income category that utilizes public transit (nj). It is important to note that while there are more than 700 households (i), the number of observations (categories, j) is only 13. Using these data, for each income bracket, the probability of taking transit can be estimated as follows:

P̂_j = n_j / N_j

This equation is an expression of relative frequency (i.e., it expresses the proportion in income bracket "j" using transit). An examination of Table 26 shows clearly that there is a progression of these relative frequencies, with higher income brackets showing lower relative frequencies, just as was hypothesized.

Table 26. Data examined by groups of households.
Xj ($)    Nj (Households)   nj (Utilizing Transit)   Pj (Defined Above)
$6,000    40                30                       0.750
$8,000    55                39                       0.709
$10,000   65                43                       0.662
$13,000   88                58                       0.659
$15,000   118               69                       0.585
$20,000   81                44                       0.543
$25,000   70                33                       0.471
$30,000   62                25                       0.403
$35,000   40                16                       0.400
$40,000   30                11                       0.367
$50,000   22                6                        0.273
$60,000   18                4                        0.222
$75,000   12                2                        0.167
Total     701               380

We can calculate the odds ratio for each income bracket listed in Table 26 and estimate the following logit function with OLS regression:

L_j = ln[ (n_j/N_j) / (1 − n_j/N_j) ] = β1 + β2·X_j

The results of this regression are shown in Table 27. The results also can be expressed as an equation:

Log odds ratio = 1.037 − 0.00003863·X

Table 27. Results of OLS regression.
Coefficients       t-values (statistics)   p-values   Measures of Fit
β1 = 1.037         12.156                  0.000      R = 0.980
β2 = −0.00003863   −16.407                 0.000      R² = 0.961; adjusted R² = 0.957

5. Interpreting the Results: This model provides a very good fit. The estimates of the coefficients can be inserted in the original cumulative logistic function to directly estimate the probability of using transit for any given X (income level). Indeed, the logistic graph in Figure 16 is produced with the estimated function.
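The grouped-data estimation described above is easy to sketch: compute the empirical log odds for each income bracket and regress them on the bracket midpoint. The code below uses the values from Table 26 and statsmodels OLS; because of rounding it should reproduce coefficients close to (but not necessarily identical to) those in Table 27.

```python
import numpy as np
import statsmodels.api as sm

# Income-bracket midpoints (Xj), households per bracket (Nj), transit users (nj) from Table 26.
x = np.array([6000, 8000, 10000, 13000, 15000, 20000, 25000,
              30000, 35000, 40000, 50000, 60000, 75000], dtype=float)
N = np.array([40, 55, 65, 88, 118, 81, 70, 62, 40, 30, 22, 18, 12], dtype=float)
n = np.array([30, 39, 43, 58, 69, 44, 33, 25, 16, 11, 6, 4, 2], dtype=float)

p_hat = n / N                          # relative frequency of transit use per bracket
logit = np.log(p_hat / (1.0 - p_hat))  # empirical log odds per bracket

fit = sm.OLS(logit, sm.add_constant(x)).fit()
b1, b2 = fit.params
print(f"log odds ratio = {b1:.3f} + ({b2:.8f}) * X,  R^2 = {fit.rsquared:.3f}")

# Predicted probability of transit use at a given income, using the fitted logit.
income = 25000.0
p = 1.0 / (1.0 + np.exp(-(b1 + b2 * income)))
print(f"predicted P(transit) at ${income:,.0f}: {p:.3f}")
```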

6. Conclusion and Discussion: This approach to estimation is not without further problems. For example, the N within each income bracket needs to be sufficiently large that the relative frequency (and therefore the resulting odds ratio) is accurately estimated. Many statisticians would say that a minimum of 25 is reasonable.

This approach also is limited by the fact that only one independent variable is used (income). Common sense suggests that the right-hand side of the function could logically be expanded to include more than one predictor variable (more Xs). For example, it could be argued that educational level might act, along with income, to account for the probability of using transit. However, combining predictor variables severely impinges on the categories (the j) used in this OLS regression formulation. To illustrate, assume that five educational categories are used in addition to the 13 income brackets (e.g., Grade 8 or less, high school graduate to Grade 9, some college, BA or BS degree, and graduate degree). For such an OLS regression analysis to work, data would be needed for 5 × 13, or 65, categories.

Ideally, other travel modes should also be considered. In the example developed here, only transit and not-transit are considered. In some locations it is entirely reasonable to examine private auto versus bus versus bicycle versus subway versus light rail (involving five modes, not just two). This notion of a polychotomous logistic regression is possible. However, five modes cannot be estimated with the OLS regression technique employed above. The logit above is a variant of the binomial distribution, and the polychotomous logistic model is a variant of the multinomial distribution (see NCHRP Project 20-45, Volume 2, Chapter 5). Estimation of these more advanced models requires maximum likelihood methods (as described in NCHRP Project 20-45, Volume 2, Chapter 5).

Other model variants are based upon other cumulative probability distributions. For example, there is the probit model, in which the normal cumulative density function is used. The probit model is very similar to the logit model, but it is more difficult to estimate.

7. Applications in Other Areas of Transportation Research: Applications of logit and related models abound within transportation studies. In any situation in which human behavior is relegated to discrete choices, this category of models may be applied. Examples in other areas of transportation research include:
• Transportation Planning—to model any "choice" issue, such as shopping destination choices.
• Traffic Safety—to model dichotomous responses (e.g., did a motorist slow down or not) in response to traffic control devices.
• Highway Design—to model public reactions to proposed design solutions (e.g., support or not support proposed road diets, installation of roundabouts, or use of traffic calming techniques).

Example 19: Public Transit; Survey Design and Analysis
Area: Public transit
Method of Analysis: Survey design and analysis (organizing survey data for statistical analysis)

1. Research Question/Problem Statement: The transit director is considering changes to the fare structure and the service characteristics of the transit system. To assist in determining which changes would be most effective or efficient, a survey of the current transit riders is developed.

Question/Issue
Use and analysis of data collected in a survey. Results from a survey of transit users are used to estimate the change in ridership that would result from a change in the service or fare.

2. Identification and Description of Variables: Two types of variables are needed for this analysis. The first is data on the characteristics of the riders, such as gender, age, and access to an automobile. These data are discrete variables. The second is data on the riders' stated responses to proposed changes in the fare or service characteristics. These data also are treated as discrete variables. Although some, like the fare, could theoretically be continuous, they are normally expressed in discrete increments (e.g., $1.00, $1.25, $1.50).

3. Data Collection: These data are normally collected by agencies conducting a survey of the transit users. The initial step in the experiment design is to choose the variables to be collected for each of these two data sets. The second step is to determine how to categorize the data. Both steps are generally based on past experience and common sense.

Some of the variables used to describe the characteristics of the transit user are dichotomous, such as gender (male or female) and access to an automobile (yes or no). Other variables, such as age, are grouped into discrete categories within which the transit riding characteristics are similar. For example, one would not expect there to be a difference between the transit trip needs of a 14-year-old student and a 15-year-old student. Thus, the survey responses of these two age groups would be assigned to the same age category. However, experience (and common sense) leads one to differentiate a 19-year-old transit user from a 65-year-old transit user, because their purposes for taking trips and their perspectives on the relative value of the fare and the service components are both likely to be different.

Obtaining user responses to changes in the fare or service is generally done in one of two ways. The first is to make a statement and ask the responder to mark one of several choices: strongly agree, agree, neither agree nor disagree, disagree, and strongly disagree. The number of statements used in the survey depends on how many parameter changes are being contemplated. Typical statements include:
1. I would increase the number of trips I make each month if the fare were reduced by $0.xx.
2. I would increase the number of trips I make each month if I could purchase a monthly pass.
3. I would increase the number of trips I make each month if the waiting time at the stop were reduced by 10 minutes.
4. I would increase the number of trips I make each month if express services were available from my origin to my destination.

The second format is to propose a change and provide multiple choices for the responder. Typical questions for this format are:
1. If the fare were increased by $0.xx per trip I would:
   a) not change the number of trips per month
   b) reduce the non-commute trips
   c) reduce both the commute and non-commute trips
   d) switch modes
2. If express service were offered for an additional $0.xx per trip I would:
   a) not change the number of trips per month on this local service
   b) make additional trips each month
   c) shift from the local service to the express service

These surveys generally are administered by handing a survey form to people as they enter the transit vehicle and collecting the forms as people depart the vehicle. The surveys also can be administered by mail, telephone, or in a face-to-face interview. In constructing the questions, care should be taken to use terms with which the respondents will be familiar. For example, if the system does not currently offer "express" service, this term will need to be defined in the survey. Other technical terms should be avoided. Similarly, the word "mode" is often used by transportation professionals but is not commonly used by the public at large. The length of a survey is almost always an issue as well. To avoid asking too many questions, each question needs to be reviewed to see if it is really necessary and will produce useful data (as opposed to just being something that would be nice to know).

4. Specification of Analysis Technique and Data Analysis: The results of these surveys often are displayed in tables or in frequency distribution diagrams (see also Example 1 and Example 2). Table 28 lists responses to a sample question posed in the form of a statement. Figure 17 shows the frequency diagram for these data.

Table 28. Responses to the sample statement, "I would increase the number of trips I make each month if the fare were reduced by $0.xx."
                  Strongly Agree   Agree   Neither Agree nor Disagree   Disagree   Strongly Disagree
Total responses   450              600     300                          400        100

Figure 17. Frequency diagram for total responses to the sample statement.

Similar presentations can be made for any of the groupings included in the first type of variables discussed above. For example, if gender is included as a Type 1 question, the results might appear as shown in Table 29; Figure 18 shows the frequency diagram for these data. Presentations of the data can be made for any combination of the discrete variable groups included in the survey. For example, to display responses of female users over 65 years old,

all of the survey forms on which these two characteristics (female and over 65 years old) are checked could be extracted, recorded in a table, and shown in a frequency diagram.

Table 29. Contingency table showing responses by gender to the sample statement, "I would increase the number of trips I make each month if the fare were reduced by $0.xx."
                  Strongly Agree   Agree   Neither Agree nor Disagree   Disagree   Strongly Disagree
Male              200              275     200                          200        70
Female            250              325     100                          200        30
Total responses   450              600     300                          400        100

Figure 18. Frequency diagram showing responses by gender to the sample statement.

5. Interpreting the Results: Survey data can be used to compare the responses to fare or service changes of different groups of transit users. This flexibility can be important in determining which changes would impact various segments of transit users. The information can be used to evaluate various fare and service options being considered and allows the transit agency to design promotions to obtain the greatest increase in ridership. For example, by creating frequency diagrams to display the responses to statements 2, 3, and 4 listed in Section 3, the engineer can compare the impact of changing the fare versus changing the headway or providing express services in the corridor.

Organizing response data according to different characteristics of the user produces contingency tables like the one illustrated for males and females (Table 29). This table format can be used to conduct chi-square analysis to determine if there is any statistically significant difference among the various groups. (Chi-square analysis is described in more detail in Example 4; a brief computational sketch is also given at the end of this example's discussion.)

6. Conclusions and Discussion: This example illustrates how to obtain and present quantitative information using surveys. Although survey results provide reasonably good estimates of the relative importance users place on different transit attributes (fare, waiting time, hours of service, etc.) when determining how often they would use the system, the magnitude of users' responses often is overstated. Experience shows that what users say they would do (their stated preference) generally is different than what they actually do (their revealed preference).
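As a minimal sketch of the chi-square comparison mentioned in Section 5, the code below applies scipy's contingency-table test to the male/female counts in Table 29; a small p-value would indicate that the response distributions differ by gender.

```python
from scipy.stats import chi2_contingency

# Rows: Male, Female. Columns: Strongly Agree, Agree, Neither, Disagree, Strongly Disagree.
table_29 = [
    [200, 275, 200, 200, 70],   # Male
    [250, 325, 100, 200, 30],   # Female
]

chi2, p_value, dof, expected = chi2_contingency(table_29)
print(f"chi-square = {chi2:.2f}, degrees of freedom = {dof}, p-value = {p_value:.4f}")
```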

In this example, 1,050 of the 1,850 respondents (57%) have responded that they would use the bus service more frequently if the fare were decreased by $0.xx. Five hundred respondents (27%) have indicated that they would not use the bus service more frequently, and 300 respondents (16%) have indicated that they are not sure if they would change their bus use frequency. These percentages show the stated preferences of the users. The engineer does not yet know the revealed preferences of the users, but experience suggests that it is unlikely that 57% of the riders would actually increase the number of trips they make.

7. Applications in Other Areas of Transportation Research: Survey design and analysis techniques can be used to collect and present data in many areas of transportation research, including:
• Transportation Planning—to assess public response to a proposal to enact a local motor fuel tax to improve road maintenance in a city or county.
• Traffic Operations—to assess public response to implementing road diets (e.g., 4-lane to 3-lane conversions) on different corridors in a city.
• Highway Design—to assess public response to proposed alternative cross-section designs, such as a boulevard design versus an undivided multilane design in a corridor.

Example 20: Traffic Operations; Simulation
Area: Traffic operations
Method of Analysis: Simulation (using field data to simulate, or model, operations or outcomes)

1. Research Question/Problem Statement: A team of engineers wants to determine whether one or more unconventional intersection designs will produce lower travel times than a conventional design at typical intersections for a given number of lanes. There is no way to collect field data to compare alternative intersection designs at a particular site. Macroscopic traffic operations models like those in the Highway Capacity Manual do a good job of estimating delay at specific points but are unable to provide travel time estimates for unconventional designs that consist of several smaller intersections and road segments. Microscopic simulation models measure the behaviors of individual vehicles as they traverse the highway network. Such simulation models are therefore very flexible in the types of networks and measures that can be examined. The team in this example turns to a simulation model to determine how other intersection designs might work.

Question/Issue
Developing and using a computer simulation model to examine operations in a computer environment. In this example, a traffic operations simulation model is used to show whether one or more unconventional intersection designs will produce lower travel times than a conventional design at typical intersections for a given number of lanes.

2. Identification and Description of Variables: The engineering team simulates seven different intersections to provide the needed scope for their findings. At each intersection, the team examines three different sets of traffic volumes: volumes from the evening (p.m.) peak hour, a typical midday off-peak hour, and a volume that is 15% greater than the p.m. peak hour to represent future conditions. At each intersection, the team models the current conventional intersection geometry and seven unconventional designs: the quadrant roadway, median U-turn, superstreet, bowtie, jughandle, split intersection, and continuous flow intersection.
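The experiment just described is a full factorial: every intersection is crossed with every volume scenario and every design, and each combination is replicated with different random number seeds. A small sketch of how such a run matrix might be enumerated (with hypothetical labels) is shown below.

```python
from itertools import product

intersections = [f"site_{k}" for k in range(1, 8)]   # seven study intersections
volumes = ["midday", "pm_peak", "pm_peak_plus_15"]   # three volume scenarios
designs = ["conventional", "quadrant", "median_u_turn", "superstreet",
           "bowtie", "jughandle", "split", "continuous_flow"]
replications = [1, 2]                                # two random-number seeds per combination

# Each tuple is one simulation run to be set up and executed in the traffic model.
run_matrix = list(product(intersections, volumes, designs, replications))
print(len(run_matrix), "simulation runs")            # 7 * 3 * 8 * 2 = 336 runs
```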
Traffic simulation models break the roadway network into nodes (intersections) and links (segments between intersections). Therefore, the engineering team has to design each of the

alternatives at each test site in terms of numbers of lanes, lane lengths, and such, and then faithfully translate that geometry into links and nodes that the simulation model can use. For each combination of traffic volume and intersection design, the team uses software to find the optimum signal timing and uses that during the simulation. To avoid bias, the team keeps all other factors (e.g., network size, numbers of lanes, turn lane lengths, truck percentages, average vehicle speeds) constant in all simulation runs.

3. Data Collection: The field data collection necessary in this effort consists of noting the current intersection geometries at the seven test intersections and counting the turning movements in the time periods described above. In many simulation efforts, it is also necessary to collect field data to calibrate and validate the simulation model. Calibration is the process by which simulation output is compared to actual measurements for some key measure(s), such as travel time. If a difference is found between the simulation output and the actual measurement, the simulation inputs are changed until the difference disappears. Validation is a test of the calibrated simulation model, comparing simulation output to a previously unused sample of actual field measurements. In this example, however, the team determines that it is unnecessary to collect calibration and validation data because a recent project has successfully calibrated and validated very similar models of most of these same unconventional designs.

The engineering team uses the CORSIM traffic operations simulation model. Well known and widely used, CORSIM models the movement of each vehicle through a specified network in small time increments. CORSIM is a good choice for this example because it was originally designed for problems of this type, has produced appropriate results, has excellent animation and other debugging features, runs quickly in these kinds of cases, and is well supported by the software developers.

The team makes two CORSIM runs with different random number seeds for each combination of volume and design at each intersection, or 48 runs for each intersection altogether. It is necessary to make more than one run (or replication) of each simulation combination with different random number seeds because of the randomness built into simulation models. The experiment design in this case allows the team to reduce the number of replications to two; typical practice in simulations when one is making simple comparisons between two variables is to make at least 5 to 10 replications. Each run lasts 30 simulated minutes. Table 30 shows the simulation data for one of the seven intersections; the lowest travel time produced in each case is marked with an asterisk. Notice that Table 30 does not show data for the bowtie design. That design became congested (gridlocked) and produced essentially infinite travel times for this intersection. Handling overly congested networks is a difficult problem in many efforts and with several different simulation software packages. The best current advice is for analysts to not push their networks too hard and to scan often for gridlock.

4. Specification of Analysis Technique and Data Analysis: The experiment assembled in this example uses a factorial design. (Factorial design also is discussed in Example 11.) The team analyzes the data from this factorial experiment using analysis of variance (ANOVA).
Table 30. Simulation results for different designs and time of day (total travel time in vehicle-hours, average of two simulation runs).

| Time of Day | Conventional | Quadrant | Median U | Superstreet | Jughandle | Split | Continuous |
|---|---|---|---|---|---|---|---|
| Midday | 67 | 64 | 61 | 74 | 63 | 59* | 75 |
| P.M. peak | 121 | 95* | 119 | 179 | 139 | 114 | 106 |
| Peak + 15% | 170 | 135* | 145 | 245 | 164 | 180 | 142 |

*Lowest total travel time.

Because the experimenter has complete control in a simulation, it is common to use efficient designs like factorials and efficient analysis methods like ANOVA to squeeze all possible information out of the effort. Statistical tests comparing the individual mean values of key results by factor are common ways to follow up on ANOVA results. Although ANOVA will reveal which factors make a significant contribution to the overall variance in the dependent variable, means tests will show which levels of a significant factor differ from the other levels. In this example, the team uses Tukey's means test, which is available as part of the battery of standard tests accompanying ANOVA in statistical software. (For more information about ANOVA, see NCHRP Project 20-45, Volume 2, Chapter 4, Section A.) A minimal code sketch of this ANOVA-plus-Tukey workflow appears at the end of this example.

5. Interpreting the Results: For the data shown in Table 30, the ANOVA reveals that the volume and design factors are statistically significant at the 99.99% confidence level. Furthermore, the interaction between the volume and design factors also is statistically significant at the 99.99% level. The means tests on the design factors show that the quadrant roadway is significantly different from (has a lower overall travel time than) the other designs at the 95% level. The next-best designs overall are the median U-turn and the continuous flow intersection; these are not statistically different from each other at the 95% level. The third tier of designs consists of the conventional and the split, which are statistically different from all others at the 95% level but not from each other. Finally, the jughandle and the superstreet designs are statistically different from each other and from all other designs at the 95% level according to the means test.

Through the simulation, the team learns that several designs appear to be more efficient than the conventional design, especially at higher volume levels. From the results at all seven intersections, the team sees that the quadrant roadway and median U-turn designs generally lead to the lowest travel times, especially with the higher volume levels.

6. Conclusion and Discussion: Simulation is an effective tool to analyze traffic operations, as at the seven intersections of interest in this example. No other tool would allow such a robust comparison of many different designs and provide the results for travel times in a larger network rather than delays at a single spot. The simulation conducted in this example also allows the team to conduct an efficient factorial design, which maximizes the information provided from the effort. Simulation is a useful tool in research for traffic operations because it:

• affords the ability to conduct randomized experiments,
• allows the examination of details that other methods cannot provide, and
• allows the analysis of large and complex networks.

In practice, simulation also is popular because of the vivid and realistic animation output provided by common software packages. The superb animations allow analysts to spot and treat flaws in the design or model and provide agencies an effective tool by which to share designs with politicians and the public. Although simulation results can sometimes be surprising, more often they confirm what the analysts already suspect based on simpler analyses.
In the example described here, the analysts suspected that the quadrant roadway and median U-turn designs would perform well because these designs had performed well in prior Highway Capacity Manual calculations. In many studies, simulations provide rich detail and vivid animation but no big surprises.

7. Applications in Other Areas of Transportation Research: Simulations are critical analysis methods in several areas of transportation research. Besides traffic operations, simulations are used in research related to:

• Maintenance—to model the lifetime performance of traffic signs.
• Traffic Safety—
  – to examine vehicle performance and driver behaviors or performance.
  – to predict the number of collisions from a new roadway design (potentially, given the recent development of the FHWA SSAM program).
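To make the factorial ANOVA and Tukey follow-up from steps 4 and 5 concrete, here is a minimal sketch in Python using pandas and statsmodels. It is not the study team's actual code, and the replication-level travel times are made-up placeholders (Table 30 reports only the two-run averages), so it illustrates the mechanics rather than reproducing the reported results.

```python
# Minimal sketch (illustrative only) of a factorial ANOVA plus Tukey's means
# test, mirroring the workflow described in steps 4 and 5.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical data: two replications (different random seeds) for each
# combination of intersection design and volume level. These numbers are
# placeholders, not values from Table 30.
designs = ["conventional", "quadrant", "median_u", "superstreet",
           "jughandle", "split", "continuous"]
volumes = ["midday", "pm_peak", "peak_plus_15"]
base = {"conventional": 120, "quadrant": 98, "median_u": 108,
        "superstreet": 166, "jughandle": 122, "split": 118, "continuous": 108}
bump = {"midday": -40, "pm_peak": 0, "peak_plus_15": 45}

records = []
for d in designs:
    for v in volumes:
        for rep, jitter in enumerate((-3, 4)):   # two replications per cell
            records.append({"design": d, "volume": v, "rep": rep,
                            "travel_time": base[d] + bump[v] + jitter})
df = pd.DataFrame(records)

# Two-way ANOVA with interaction: do design, volume, or their interaction
# explain the variance in total travel time?
model = ols("travel_time ~ C(design) * C(volume)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey's means test on the design factor: which designs differ from which?
tukey = pairwise_tukeyhsd(endog=df["travel_time"], groups=df["design"], alpha=0.05)
print(tukey)
```

The ANOVA table shows which factors (and their interaction) contribute significantly to the variance, and the Tukey output lists which pairs of designs differ, mirroring the interpretation in step 5.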

Example 21: Traffic Safety; Non-parametric Methods

Area: Traffic safety
Method of Analysis: Non-parametric methods (methods used when data do not follow assumed or conventional distributions, such as when comparing median values)

1. Research Question/Problem Statement: A city traffic engineer has been receiving many citizen complaints about the perceived lack of safety at unsignalized midblock crosswalks. Apparently, some motorists seem surprised by pedestrians in the crosswalks and do not yield to the pedestrians. The engineer believes that larger and brighter warning signs may be an inexpensive way to enhance safety at these locations.

Question/Issue: Determine whether some treatment has an effect when the data to be tested do not follow known distributions. In this example, a nonparametric method is used to determine whether larger and brighter warning signs improve pedestrian safety at unsignalized midblock crosswalks. The null hypothesis and alternative hypothesis are stated as follows:

Ho: There is no difference in the median values of the number of conflicts before and after a treatment.
Ha: There is a difference in the median values.

2. Identification and Description of Variables: The engineer would like to collect collision data at crosswalks with improved signs, but it would take a long time at a large sample of crosswalks to collect a reasonable sample size of collisions to answer the question. Instead, the engineer collects data for conflicts, which are near-collisions when one or both of the involved entities brakes or swerves within 2 seconds of a collision to avoid the collision. Research literature has shown that conflicts are related to collisions, and because conflicts are much more numerous than collisions, it is much quicker to collect a good sample size. Conflict data are not nearly as widely used as collision data, however, and the underlying distribution of conflict data is not clear. Thus, the use of non-parametric methods seems appropriate.

3. Data Collection: The engineer identifies seven test crosswalks in the city based on large pedestrian volumes and the presence of convenient vantage points for observing conflicts. The engineering staff collects data on traffic conflicts for 2 full days at each of the seven crosswalks with standard warning signs. The engineer then has larger and brighter warning signs installed at the seven sites. After waiting at least 1 month at each site after sign installation, the staff again collects traffic conflicts for 2 full days, making sure that weather, light, and as many other conditions as possible are similar between the before-and-after data collection periods at each site.

4. Specification of Analysis Technique and Data Analysis: A nonparametric statistical test is an efficient way to analyze data when the underlying distribution is unclear (as in this example using conflict data) and when the sample size is small (as in this example with its small number of sites). Several such tests, such as the sign test and the Wilcoxon signed-rank (Wilcoxon rank-sum) test, are plausible in this example.
(For more information about nonparametric tests, see NCHRP Project 20-45, Volume 2, Chapter 6, Section D, “Hypothesis About Population Medians for Independent Samples.” ) The decision is made to use the Wilcoxon signed-rank test because it is a more powerful test for paired numerical measurements than other tests, and this example uses paired (before-and-after) measurements. The sign test is a popular nonparametric test for paired data but loses information contained in numerical measurements by reducing the data to a series of positive or negative signs.
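Before walking through the hand calculation, it is worth noting that the test itself is quick to run in statistical software. The following is a minimal sketch, assuming Python with SciPy is available, applied to the site-by-site differences that appear in Table 31 below. SciPy reports its own statistic and a p-value rather than the table-lookup critical value used in the text, so the numbers are not expected to match one-for-one, but the conclusion should agree.

```python
# Minimal sketch of a paired Wilcoxon signed-rank test with SciPy.
from scipy.stats import wilcoxon

# Site-by-site differences: conflicts with standard signs minus conflicts
# with larger and brighter signs (positive = fewer conflicts after the change).
differences = [15, 7, 2, 3, 7, -12, -16]

# One-sided test of whether the new signs reduced conflicts.
stat, p_value = wilcoxon(differences, alternative="greater")
print(f"statistic = {stat}, p-value = {p_value:.3f}")

# wilcoxon() also accepts the two paired samples directly, e.g.
# wilcoxon(before_counts, after_counts, alternative="greater").
```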

Having decided on the Wilcoxon signed-rank test, the engineer arranges the data (see Table 31).

Table 31. Number of conflicts recorded during each (equal) time period at each site.

| | Site 1 | Site 2 | Site 3 | Site 4 | Site 5 | Site 6 | Site 7 |
|---|---|---|---|---|---|---|---|
| Standard signs | 170 | 39 | 35 | 32 | 32 | 19 | 45 |
| Larger and brighter signs | 155 | 26 | 33 | 29 | 25 | 31 | 61 |
| Difference | 15 | 7 | 2 | 3 | 7 | -12 | -16 |
| Rank of absolute difference | 6 | 3.5 | 1 | 2 | 3.5 | 5 | 7 |

The third row of the table is the difference between the frequencies of the two conflict measurements at each site. The last row shows the rank order of the sites from lowest to highest based on the absolute value of the difference. Site 3 has the least difference (35 - 33 = 2) while Site 7 has the greatest difference (45 - 61 = -16). The Wilcoxon signed-rank test ranks the differences from low to high in terms of absolute values. In this case, that would be 2, 3, 7, 7, 12, 15, and 16. The test statistic, x, is the sum of the ranks that have positive differences. In this example, x = 1 + 2 + 3.5 + 3.5 + 6 = 16. Notice that all but the sixth and seventh ranked sites had positive differences. Notice also that the tied differences were assigned ranks equal to the average of the ranks they would have received if they were just slightly different from each other.

The engineer then consults a table for the Wilcoxon signed-rank test to get a critical value against which to compare. (Such a table appears in NCHRP Project 20-45, Volume 2, Appendix C, Table C-8.) The standard table for a sample size of seven shows that the critical value for a one-tailed test (testing whether there is an improvement) with a confidence level of 95% is x = 24.

5. Interpreting the Results: Because the calculated value (x = 16) is less than the critical value (x = 24), the engineer concludes that there is not a statistically significant difference between the number of conflicts recorded with standard signs and the number of conflicts recorded with larger and brighter signs.

6. Conclusion and Discussion: Nonparametric tests do not require the engineer to make restrictive assumptions about an underlying distribution and are therefore good choices in cases like this, in which the sample size is small and the data collected do not have a familiar underlying distribution. Many nonparametric tests are available, so analysts should do some reading and searching before settling on the best one for any particular case. Once a nonparametric test is determined, it is usually easy to apply.

This example also illustrates one of the potential pitfalls of statistical testing. The engineer's conclusion is that there is not a statistically significant difference between the number of conflicts recorded with standard signs and the number of conflicts recorded with larger and brighter signs. That conclusion does not necessarily mean that larger and brighter signs are a bad idea at sites similar to those tested. Notice that in this experiment, larger and brighter signs produced lower conflict frequencies at five of the seven sites, and the average number of conflicts per site was lower with the larger and brighter signs. Given that signs are relatively inexpensive, they may be a good idea at sites like those tested. A statistical test can provide useful information, especially about the quality of the experiment, but analysts must be careful not to interpret the results of a statistical test too strictly. In this example, the greatest danger to the validity of the test result lies not in the statistical test but in the underlying before-and-after test setup.
For the results to be valid, it is necessary that the only important change that affects conflicts at the test sites during data collection be the new signs. The engineer has kept the duration short between the before-and-after data collection periods, which helps minimize the chances of other important changes. However, if there is any reason to suspect other important changes, these test results should be viewed skeptically and a more sophisticated test strategy should be employed.

7. Applications in Other Areas of Transportation Research: Nonparametric tests are helpful when researchers are working with small sample sizes or sample data wherein the underlying distribution is unknown. Examples of other areas of transportation research in which nonparametric tests may be applied include:

• Transportation Planning, Public Transportation—to analyze data from surveys and questionnaires when the scale of the response calls into question the underlying distribution. Such data are often analyzed in transportation planning and public transportation.
• Traffic Operations—to analyze small samples of speed or volume data.
• Structures, Pavements—to analyze quality ratings of pavements, bridges, and other transportation assets. Such ratings also use scales.

Resources

The examples used in this report have included references to the following resources. Researchers are encouraged to consult these resources for more information about statistical procedures.

Freund, R. J. and W. J. Wilson (2003). Statistical Methods. 2d ed. Burlington, MA: Academic Press. See page 256 for a discussion of Tukey's procedure.

Kutner, M. et al. (2005). Applied Linear Statistical Models. 5th ed. Boston: McGraw-Hill. See page 746 for a discussion of Tukey's procedure.

NCHRP CD-22: Scientific Approaches to Transportation Research, Vol. 1 and 2. 2002. Transportation Research Board of the National Academies, Washington, D.C. This two-volume electronic manual developed under NCHRP Project 20-45 provides a comprehensive source of information on the conduct of research. The manual includes state-of-the-art techniques for problem statement development; literature searching; development of the research work plan; execution of the experiment; data collection, management, quality control, and reporting of results; and evaluation of the effectiveness of the research, as well as the requirements for the systematic, professional, and ethical conduct of transportation research.

For readers' convenience, the references to NCHRP Project 20-45 from the various examples contained in this report are summarized here by topic and location in NCHRP CD-22. More information about NCHRP CD-22 is available at http://www.trb.org/Main/Blurbs/152122.aspx.

• Analysis of Variance (one-way ANOVA and two-way ANOVA): See Volume 2, Chapter 4, Section A, Analysis of Variance Methodology (pp. 113, 119–31).
• Assumptions for residual errors: See Volume 2, Chapter 4.
• Box plots; Q-Q plots: See Volume 2, Chapter 6, Section C.
• Chi-square test: See Volume 2, Chapter 6, Sections E (Chi-Square Test for Independence) and F.
• Chi-square values: See Volume 2, Appendix C, Table C-2.
• Computations on unbalanced designs and multi-factorial designs: See Volume 2, Chapter 4, Section A, Analysis of Variance Methodology (pp. 119–31).
• Confidence intervals: See Volume 2, Chapter 4.
• Correlation coefficient: See Volume 2, Appendix A, Glossary, Correlation Coefficient.
• Critical F-value: See Volume 2, Appendix C, Table C-5.
• Desirable and undesirable residual plots (scatter plots): See Volume 2, Chapter 4, Section B, Figure 6.

• Equation fit: See Volume 2, Chapter 4, Glossary, Descriptive Measures of Association Between X and Y.
• Error distributions (normality, constant variance, uncorrelated, etc.): See Volume 2, Chapter 4 (pp. 146–55).
• Experiment design and data collection: See Volume 2, Chapter 1.
• Fcrit and F-distribution table: See Volume 2, Appendix C, Table C-5.
• F-test: See Volume 2, Chapter 4, Section A, Compute the F-ratio Test Statistic (p. 124).
• Formulation of formal hypotheses for testing: See Volume 1, Chapter 2, Hypothesis; Volume 2, Appendix A, Glossary.
• History and maturation biases (specification errors): See Volume 2, Chapter 1, Quasi-Experiments.
• Indicator (dummy) variables: See Volume 2, Chapter 4 (pp. 142–45).
• Intercept and slope: See Volume 2, Chapter 4 (pp. 140–42).
• Maximum likelihood methods: See Volume 2, Chapter 5 (pp. 208–11).
• Mean and standard deviation formulas: See Volume 2, Chapter 6, Table C, Frequency Distributions, Variance, Standard Deviation, Histograms, and Boxplots.
• Measured ratio or interval scale: See Volume 2, Chapter 1 (p. 83).
• Multinomial distribution and polychotomous logistical model: See Volume 2, Chapter 5 (pp. 211–18).
• Multiple (multivariate) regression: See Volume 2, Chapter 4, Section B.
• Non-parametric tests: See Volume 2, Chapter 6, Section D.
• Normal distribution: See Volume 2, Appendix A, Glossary, Normal Distribution.
• One- and two-sided hypothesis testing (one- and two-tail test values): See Volume 2, Chapter 4 (pp. 161 and 164–5).
• Ordinary least squares (OLS) regression: See Volume 2, Chapter 4, Section B, Linear Regression.
• Sample size and confidence: See Volume 2, Chapter 1, Sample Size Determination.
• Sample size determination based on statistical power requirements: See Volume 2, Chapter 1, Sample Size Determination (p. 94).
• Sign test and the Wilcoxon signed-rank (Wilcoxon rank-sum) test: See Volume 2, Chapter 6, Section D, and Appendix C, Table C-8, Hypothesis About Population Medians for Independent Samples.
• Split samples: See Volume 2, Chapter 4, Section A, Analysis of Variance Methodology (pp. 119–31).
• Standard chi-square distribution table: See Volume 2, Appendix C, Table C-2.
• Standard normal values: See Volume 2, Appendix C, Table C-1.
• tcrit values: See Volume 2, Appendix C, Table C-4.
• t-statistic: See Volume 2, Appendix A, Glossary.
• t-statistic using equation for equal variance: See Volume 2, Appendix C, Table C-4.
• t-test: See Volume 2, Chapter 4, Section B, How are t-statistics Interpreted?
• Tabularized values of t-statistic: See Volume 2, Appendix C, Table C-4.
• Tukey's test, Bonferroni's test, Scheffe's test: See Volume 2, Chapter 4, Section A, Analysis of Variance Methodology (pp. 119–31).
• Types of data and implications for selection of analysis techniques: See Volume 2, Chapter 1, Identification of Empirical Setting.


TRB’s National Cooperative Highway Research Program (NCHRP) Report 727: Effective Experiment Design and Data Analysis in Transportation Research describes the factors that may be considered in designing experiments and presents 21 typical transportation examples illustrating the experiment design process, including selection of appropriate statistical tests.

The report is a companion to NCHRP CD-22, Scientific Approaches to Transportation Research, Volumes 1 and 2 , which present detailed information on statistical methods.


Data Analysis in Research | Methods, Techniques & Examples


Prompts About Data Analysis:

Definition Prompt:

In at least three to four sentences, provide the definition of data analysis and explain what it can be used to ascertain.

Example: Data analysis can help you solve problems.

Graphic Organizer Prompt:

Create a poster, chart, or some other type of graphic organizer that defines and shows the differences between qualitative analysis and quantitative analysis.

Tip: It can be helpful to provide hints to help remember the differences between the two (i.e., qualitative analysis derives from "quality," so it assesses the qualities of things).

Essay Prompt 1:

Write an essay of at least three to four paragraphs that explains the common research techniques of qualitative analysis.

Example: Observations of participants being studied.

Essay Prompt 2:

In approximately three to four paragraphs, write an essay that describes the research methods of quantitative analysis.

Example: Mathematics is at the heart of quantitative analysis.

Scenario Prompt:

In an essay of at least one page, write about a research scenario in which you could use quantitative and qualitative analysis. Provide examples of how you would use each type of analysis.

Example: You want to study why some people prefer SUVs to sedans. You could interview SUV drivers for qualitative analysis and you could look at statistics of SUV and sedan ownership for quantitative analysis.

What are the different types of data analysis?

There are two major types of data analysis methods that are used in research: qualitative analysis, which is characteristics-focused, and quantitative analysis, which is numbers-focused. Within these types are multiple subcategories, such as text analysis, statistical analysis, diagnostic analysis, and predictive analysis.

What are some examples of data analysis?

Data analysis is the systematic process of investigating, through varied techniques, facts and figures to make conclusions about a specific question or topic. Examples include analyzing data gathered from customer surveys, conducting interviews, or reviewing case files.

Table of Contents

  • What is data analysis in research
  • Importance of data analysis
  • Data analysis methods
  • Data analysis techniques and steps
  • Examples of data analysis in research papers
  • Lesson summary

Data analysis in research is the systematic process of investigating, through varied techniques, facts and figures to make conclusions about a specific question or topic. Data is available in many different forms and can be found in many different places. Whether the data is on paper or in a database, across town or across the world, one must gather it together and transform it to make it understandable. This process is called data analysis, and examples can include gathering surveys from customers, conducting interviews, or reviewing case files.


Data analysis in research is important, as it helps people make better decisions, solve problems, and reduce costs (e.g., operational costs in business). Through the process of data analysis, the data that was once in many places, and in many forms, is transformed into a consistent, clean format so researchers can group and analyze the outcomes. Data can provide information that can support proper budgeting, ensure appropriate resource allocation, or reveal investment opportunities.

There are two major types of data analysis methods that are used in research. Qualitative analysis uses non-numerical data (e.g., text or images) and subjective judgment (i.e., judgement that is based on the personal feelings of the reviewer) to determine groupings. It is considered characteristics-focused, as objects within the analysis are grouped together based on their characteristics. The other type is quantitative analysis, which is numbers-focused and uses a variety of calculations or statistics to gather meaning from the data.

Within the two major types of data analysis methods, there are also several subcategories. The type of analysis used by the research group depends on what question they are trying to answer and how much data needs to be analyzed. For example, for research projects with large text sets, the best qualitative analysis subcategory type would be text analysis, which uses computer software to review unstructured data sources to gain an understanding of the organization. Unstructured data , such as emails, social media, and customer survey responses, can be quickly read to identify patterns in written text, which could identify possible problems so they can be fixed quickly. For example, software can review social media for negative feedback and bring this to the awareness of managers so it can be quickly resolved. If the research is focused more on answering questions such as "How many?" or "How much?" the quantitative subcategory of statistical analysis , which uses a systematic approach with defined rules and procedures to review data, would be the better approach.

One can also use diagnostic or predictive analysis. Diagnostic analysis works to understand the reasons behind a problem and is sometimes called "root cause analysis." This type of analysis looks for patterns in the data, as well as any dependencies that are causing the issue. It can be done by machines, but it then needs human review to put the patterns into context. Predictive analytics uses historical data trends to make predictions about the future. For example, a retail store can use this type of analysis to predict what staffing levels it may need for the holiday season. This is essential, because the store will need to hire extra workers to handle the expected increase in customers, and it knows its customer base typically grows during the holidays because predictive analytics revealed that historical trend.
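As a rough illustration of the predictive idea, the sketch below (Python with NumPy, entirely hypothetical numbers) estimates how much busier December tends to be from two years of monthly customer counts; real predictive analytics would use far more data and more sophisticated models.

```python
# Minimal, illustrative sketch of using historical trends to anticipate
# holiday-season demand. All numbers are made up.
import numpy as np

# Hypothetical monthly customer counts for two prior years (Jan..Dec).
year_1 = np.array([820, 800, 840, 860, 900, 880, 910, 950, 970, 1010, 1150, 1400])
year_2 = np.array([860, 850, 880, 900, 930, 920, 960, 990, 1020, 1060, 1210, 1480])
history = np.vstack([year_1, year_2])

# Compare average December traffic with the average of the other months.
december_avg = history[:, 11].mean()
baseline_avg = history[:, :11].mean()
holiday_lift = december_avg / baseline_avg

print(f"December runs about {holiday_lift:.2f}x normal traffic")
print(f"naive December forecast: {holiday_lift * year_2[:11].mean():.0f} customers")
```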

Although multiple data analysis techniques exist, there are typically five main steps to complete a data analysis:

  • Determine the scope of the study. In this step, researchers identify what is being examined and what questions they are trying to answer.
  • Collect the data. This involves deciding what data elements are needed to answer the question, what data is available, and how to obtain it.
  • Process the data. The data needs to be organized and placed in consistent formats that can be analyzed.
  • Analyze the data. This step is where researchers start to group the data to identify characteristics, trends, or patterns.
  • Infer and interpret results. This is the final product of the analysis that is provided to the audience: researchers review the data and draw conclusions based on their findings.
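The five steps can be illustrated with a small, hypothetical example. The sketch below uses Python with pandas; the column names and values are invented for illustration and are not part of the lesson.

```python
# Minimal sketch of the five data analysis steps on a tiny, made-up survey.
import pandas as pd

# 1. Scope: are customers satisfied, and does satisfaction differ by region?

# 2. Collect: in practice this might come from a CSV export or a database;
#    here a small inline sample stands in for the raw responses.
responses = pd.DataFrame({
    "region": [" north", "South", "north ", "south", "East", None],
    "satisfaction_score": [4, 5, 3, None, 4, 5],
})

# 3. Process: clean the data into a consistent format.
responses["region"] = responses["region"].str.strip().str.title()
responses = responses.dropna(subset=["region", "satisfaction_score"])

# 4. Analyze: group the data to surface patterns.
summary = responses.groupby("region")["satisfaction_score"].agg(["mean", "count"])

# 5. Infer and interpret: turn the numbers into a conclusion for the audience.
print(summary.sort_values("mean", ascending=False))
```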

There are many different examples of data analysis in research papers. The data analysis type used is based on what question is being asked and the resources available to the group providing the research. For example, qualitative analysis uses different types of research strategies to gather outcomes, such as participant observation, where the researcher is part of the group being observed over a prolonged term. An example of this would be the ten-year study on the international steel trading done by Iacono and Brown in their 2009 research paper "A Case Example of Participant Observation." This research focused on the evolution of steel electronic commerce, which was published in the Electronic Journal of Business Research Methods . For quantitative research , research techniques can include tracking participants through websites or reviewing survey responses. A current example of extensive research tracking is Google Analytics, which uses "tracking cookies" to collect data about how customers use a website. This information is provided to the owners of the website, who then use this data to make changes based on the customer analysis outcomes.

Regression analysis can be used to help understand relationships between outcomes and their contributing factors. One example of where this could be used is in modeling the retention rates of students, such as what was done in the Journal of Engineering Education, when researchers published a "Comparison of transfer shock and graduation rates across engineering transfer student populations." The study results informed institutions of the importance of recognizing engineering transfer students as a non-homogeneous population.
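For illustration only, a regression of this general kind might look like the following sketch in Python with statsmodels; the numbers are invented and it is not a re-analysis of the cited study.

```python
# Minimal, illustrative regression sketch: relating a transfer student's
# first-term GPA change ("transfer shock") to contributing factors.
# All values are made up.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "gpa_change":   [-0.6, -0.2, -0.9, -0.1, -0.4, 0.1, -0.7, -0.3, 0.0, -0.5],
    "credits_lost": [9, 3, 15, 0, 6, 1, 12, 5, 2, 8],
    "prior_gpa":    [3.1, 3.6, 2.8, 3.8, 3.2, 3.9, 2.9, 3.4, 3.7, 3.0],
})

# Ordinary least squares: how are lost credits and prior GPA associated with
# the change in GPA after transfer?
model = smf.ols("gpa_change ~ credits_lost + prior_gpa", data=data).fit()
print(model.params)
print(model.pvalues)
```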

Regardless of the data analysis method chosen, the output of data analysis is typically a textual report; however, visualizations of the data are also included. These could be in the form of graphs, charts, or maps. These visualizations are essential, as they provide quick insight into what the data is showing, improve understanding of the outputs (such as grouping and outliers), and allow for a review of the data quality.

Examples of graphs and charts

Data analysis in research is the systematic process of investigating facts and figures to make conclusions about a specific question or topic; there are two major types of data analysis methods in research. Qualitative analysis ensures objects are grouped together based on their characteristics to gather meaning. It uses research techniques such as participant observations to evaluate and interpret outcomes. Quantitative analysis uses numbers, mathematics, and statistics to gather meaning from data, and uses research techniques such as tracking or surveys to understand and gather information related to the behavior or actions of participants.

There are many different examples of data analysis in research papers and there are many data analysis techniques that can be used. Regardless of whether the approach to data analysis uses a qualitative technique such as participant observation or a quantitative technique such as regression, there are five steps to complete a data analysis. These steps include determining the scope of the project; collecting, processing, and analyzing the data; and inferring and interpreting results. Although the outcomes of data research projects and papers are typically textual, visualizations such as graphs, charts, or maps are usually provided to share context and provide in-depth understanding of the topic researched and outcomes provided.

Video Transcript

A Beginning Look at Data Analysis

Let's imagine that you have just enrolled in your first college course. After two days of class, your professor assigns you a research assignment. You are to research which type of school system is better, private or public. Immediately, you have an opinion of which system you feel is better, but you realize that conducting research is not about your own personal opinion. Research is about gathering data that you can analyze and use to come to some sort of conclusion. So, before you begin your data collection, you realize that you have a lot to learn about the various methods and techniques of gathering data.

Before we look at the methods and techniques of data analysis, let's first define what data analysis is. Data analysis is the collecting and organizing of data so that a researcher can come to a conclusion. Data analysis allows one to answer questions, solve problems, and derive important information. So, you now know that the purpose of your assignment is to gather enough information to come to a conclusion about which school system is better.

Methods of Data Analysis

Okay, you have decided to prove that public school is better than private school, but now you need to figure out how you will collect the information and data needed to support that idea. There are two methods that a researcher can pursue: qualitative and quantitative.

Qualitative research revolves around describing characteristics. It does not use numbers. A good way to remember qualitative research is to think of quality.

Quantitative research is the opposite of qualitative research because its prime focus is numbers. Quantitative research is all about quantity.

Qualitative Data Analysis Techniques

Qualitative research works with descriptions and characteristics. Let's look at some of the more common qualitative research techniques:

Participant observation is when the researcher becomes part of the group that they are observing. This technique can take a long period of time because the researcher needs to be accepted into the group so that they observe data that is natural.

Direct observation is where the researcher is strictly observing and not taking part in any of the activities. An example of this might be observing a classroom in both a private and public school so that you can see the differences in teaching styles.

With interviews, the researcher is able to talk with participants and ask them questions and gather their opinions. So for your research, you might ask the students why they like their school.

Case studies can be customized for each research study. The researcher develops a study that revolves around the participants. They can use a variety of techniques, including interviews, participant observations, and more.

Quantitative Data Analysis Techniques

Quantitative research uses numbers. This means that there is usually a substantial amount of mathematics involved in a quantitative study. Let's look at some of the techniques:

The goal of a survey is to gather responses from the participants through questions. For your assignment about school systems, you might mail out a survey to the teachers at both public and private schools. This will allow you to gather the different perspectives from each type of school.

Tracking involves following the behavior or actions of participants. A great example of this is websites that track the customers who visit them.

Experiments can be customized for the type of research project. They are a way to test some factor. An example might be taking children from a public school and placing them in a private school for a day.

Data analysis has two prominent methods: qualitative research and quantitative research . Each method has their own techniques. Interviews and observations are forms of qualitative research, while experiments and surveys are quantitative research.


Qualitative Data Analysis: Step-by-Step Guide (Manual vs. Automatic)

When we conduct qualitative research, need to explain changes in metrics, or want to understand people's opinions, we turn to qualitative data. Qualitative data is typically generated through:

  • Interview transcripts
  • Surveys with open-ended questions
  • Contact center transcripts
  • Texts and documents
  • Audio and video recordings
  • Observational notes

Compared to quantitative data, which captures structured information, qualitative data is unstructured and has more depth. It can answer our questions, can help formulate hypotheses and build understanding.

It's important to understand the differences between quantitative data & qualitative data. But unfortunately, analyzing qualitative data is difficult. While tools like Excel, Tableau and PowerBI crunch and visualize quantitative data with ease, there are a limited number of mainstream tools for analyzing qualitative data. The majority of qualitative data analysis still happens manually.

That said, there are two new trends that are changing this. First, there are advances in natural language processing (NLP) which is focused on understanding human language. Second, there is an explosion of user-friendly software designed for both researchers and businesses. Both help automate the qualitative data analysis process.

In this post we want to teach you how to conduct a successful qualitative data analysis. There are two primary qualitative data analysis methods: manual and automatic. We will teach you how to conduct the analysis manually, and also automatically using software solutions powered by NLP. We'll guide you through the steps of a manual analysis, and look at what is involved and the role technology can play in automating the process.

More businesses are switching to fully-automated analysis of qualitative customer data because it is cheaper, faster, and just as accurate. Primarily, businesses purchase subscriptions to feedback analytics platforms so that they can understand customer pain points and sentiment.

Overwhelming quantity of feedback

We'll take you through 5 steps to conduct a successful qualitative data analysis. Within each step we will highlight the key differences between the manual and the automated approach. Here's an overview of the steps:

The 5 steps to doing qualitative data analysis

  • Gathering and collecting your qualitative data
  • Organizing and connecting your qualitative data
  • Coding your qualitative data
  • Analyzing the qualitative data for insights
  • Reporting on the insights derived from your analysis

What is Qualitative Data Analysis?

Qualitative data analysis is a process of gathering, structuring and interpreting qualitative data to understand what it represents.

Qualitative data is non-numerical and unstructured. Qualitative data generally refers to text, such as open-ended responses to survey questions or user interviews, but also includes audio, photos and video.

Businesses often perform qualitative data analysis on customer feedback. And within this context, qualitative data generally refers to verbatim text data collected from sources such as reviews, complaints, chat messages, support centre interactions, customer interviews, case notes or social media comments.

How is qualitative data analysis different from quantitative data analysis?

Understanding the differences between quantitative & qualitative data is important. When it comes to analyzing data, Qualitative Data Analysis serves a very different role to Quantitative Data Analysis. But what sets them apart?

Qualitative Data Analysis dives into the stories hidden in non-numerical data such as interviews, open-ended survey answers, or notes from observations. It uncovers the ‘whys’ and ‘hows’ giving a deep understanding of people’s experiences and emotions.

Quantitative Data Analysis on the other hand deals with numerical data, using statistics to measure differences, identify preferred options, and pinpoint root causes of issues.  It steps back to address questions like "how many" or "what percentage" to offer broad insights we can apply to larger groups.

In short, Qualitative Data Analysis is like a microscope,  helping us understand specific detail. Quantitative Data Analysis is like the telescope, giving us a broader perspective. Both are important, working together to decode data for different objectives.

Qualitative Data Analysis methods

Once all the data has been captured, there are a variety of analysis techniques available and the choice is determined by your specific research objectives and the kind of data you’ve gathered.  Common qualitative data analysis methods include:

Content Analysis

This is a popular approach to qualitative data analysis. Other qualitative analysis techniques may fit within the broad scope of content analysis; thematic analysis, for example, is a part of content analysis. Content analysis is used to identify the patterns that emerge from text by grouping content into words, concepts, and themes. It is also useful for quantifying the relationships between the grouped content. The Columbia School of Public Health has a detailed breakdown of content analysis.
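As a simple illustration of the counting side of content analysis, the sketch below (plain Python, with made-up responses and concept keywords) groups open-ended responses into concepts and tallies how often each appears.

```python
# Minimal, illustrative content analysis: group responses into concepts
# using keyword rules and count how often each concept occurs.
from collections import Counter

responses = [
    "The checkout was slow and the delivery was late",
    "Great price, but delivery took two weeks",
    "Support was friendly and the price was fair",
    "Slow website, slow checkout, slow everything",
]

concepts = {
    "delivery": ["delivery", "late", "shipping"],
    "price":    ["price", "cheap", "expensive"],
    "speed":    ["slow", "fast", "quick"],
    "support":  ["support", "help", "service"],
}

counts = Counter()
for text in responses:
    lowered = text.lower()
    for concept, keywords in concepts.items():
        if any(word in lowered for word in keywords):
            counts[concept] += 1

print(counts.most_common())   # which concepts come up most often
```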

Narrative Analysis

Narrative analysis focuses on the stories people tell and the language they use to make sense of them.  It is particularly useful in qualitative research methods where customer stories are used to get a deep understanding of customers’ perspectives on a specific issue. A narrative analysis might enable us to summarize the outcomes of a focused case study.

Discourse Analysis

Discourse analysis is used to get a thorough understanding of the political, cultural and power dynamics that exist in specific situations.  The focus of discourse analysis here is on the way people express themselves in different social contexts. Discourse analysis is commonly used by brand strategists who hope to understand why a group of people feel the way they do about a brand or product.

Thematic Analysis

Thematic analysis is used to deduce the meaning behind the words people use. This is accomplished by discovering repeating themes in text. These meaningful themes reveal key insights into data and can be quantified, particularly when paired with sentiment analysis . Often, the outcome of thematic analysis is a code frame that captures themes in terms of codes, also called categories. So the process of thematic analysis is also referred to as “coding”. A common use-case for thematic analysis in companies is analysis of customer feedback.

Grounded Theory

Grounded theory is a useful approach when little is known about a subject. Grounded theory starts by formulating a theory around a single data case. This means that the theory is “grounded”. Grounded theory analysis is based on actual data, and not entirely speculative. Then additional cases can be examined to see if they are relevant and can add to the original grounded theory.

Methods of qualitative data analysis; approaches and techniques to qualitative data analysis

Challenges of Qualitative Data Analysis

While Qualitative Data Analysis offers rich insights, it comes with its challenges. Each unique QDA method has its unique hurdles. Let’s take a look at the challenges researchers and analysts might face, depending on the chosen method.

  • Time and Effort (Narrative Analysis): Narrative analysis, which focuses on personal stories, demands patience. Sifting through lengthy narratives to find meaningful insights can be time-consuming and requires dedicated effort.
  • Being Objective (Grounded Theory): Grounded theory, building theories from data, faces the challenges of personal biases. Staying objective while interpreting data is crucial, ensuring conclusions are rooted in the data itself.
  • Complexity (Thematic Analysis): Thematic analysis involves identifying themes within data, a process that can be intricate. Categorizing and understanding themes can be complex, especially when each piece of data varies in context and structure. Thematic Analysis software can simplify this process.
  • Generalizing Findings (Narrative Analysis): Narrative analysis, dealing with individual stories, makes drawing broad generalizations challenging. Extending findings from a single narrative to a broader context requires careful consideration.
  • Managing Data (Thematic Analysis): Thematic analysis involves organizing and managing vast amounts of unstructured data, like interview transcripts. Managing this can be a hefty task, requiring effective data management strategies.
  • Skill Level (Grounded Theory): Grounded theory demands specific skills to build theories from the ground up. Finding or training analysts with these skills poses a challenge, requiring investment in building expertise.

Benefits of qualitative data analysis

Qualitative Data Analysis (QDA) is like a versatile toolkit, offering a tailored approach to understanding your data. The benefits it offers are as diverse as the methods. Let’s explore why choosing the right method matters.

  • Tailored Methods for Specific Needs: QDA isn't one-size-fits-all. Depending on your research objectives and the type of data at hand, different methods offer unique benefits. If you want emotive customer stories, narrative analysis paints a strong picture. When you want to explain a score, thematic analysis reveals insightful patterns.
  • Flexibility with Thematic Analysis: Thematic analysis is like a chameleon in the QDA toolkit. It adapts well to different types of data and research objectives, making it a top choice for almost any qualitative analysis.
  • Deeper Understanding, Better Products: QDA helps you dive into people's thoughts and feelings. This deep understanding helps you build products and services that truly match what people want, ensuring satisfied customers.
  • Finding the Unexpected: Qualitative data often reveals surprises that we miss in quantitative data. QDA offers new ideas and perspectives, and insights we might otherwise miss.
  • Building Effective Strategies: Insights from QDA are like strategic guides. They help businesses in crafting plans that match people’s desires.
  • Creating Genuine Connections: Understanding people’s experiences lets businesses connect on a real level. This genuine connection helps build trust and loyalty, priceless for any business.

How to do Qualitative Data Analysis: 5 steps

Now we are going to show how you can do your own qualitative data analysis. We will guide you through this process step by step. As mentioned earlier, you will learn how to do qualitative data analysis manually , and also automatically using modern qualitative data and thematic analysis software.

To get best value from the analysis process and research process, it’s important to be super clear about the nature and scope of the question that’s being researched. This will help you select the research collection channels that are most likely to help you answer your question.

Depending on if you are a business looking to understand customer sentiment, or an academic surveying a school, your approach to qualitative data analysis will be unique.

Once you’re clear, there’s a sequence to follow. And, though there are differences in the manual and automatic approaches, the process steps are mostly the same.

The use case for our step-by-step guide is a company looking to collect customer feedback data and analyze it in order to improve its customer experience. By analyzing the customer feedback, the company derives insights about its business and its customers. You can follow these same steps regardless of the nature of your research. Let's get started.

Step 1: Gather your qualitative data and conduct research (Conduct qualitative research)

The first step of qualitative research is to do data collection. Put simply, data collection is gathering all of your data for analysis. A common situation is when qualitative data is spread across various sources.

Classic methods of gathering qualitative data

Most companies use traditional methods for gathering qualitative data: conducting interviews with research participants, running surveys, and running focus groups. This data is typically stored in documents, CRMs, databases and knowledge bases. It’s important to examine which data is available and needs to be included in your research project, based on its scope.

Using your existing qualitative feedback

As it becomes easier for customers to engage across a range of different channels, companies are gathering increasingly large amounts of both solicited and unsolicited qualitative feedback.

Most organizations have now invested in Voice of Customer programs , support ticketing systems, chatbot and support conversations, emails and even customer Slack chats.

These new channels provide companies with new ways of getting feedback, and also allow the collection of unstructured feedback data at scale.

The great thing about this data is that it contains a wealth of valuable insights and that it's already there! When you have a new question about user behavior or your customers, you don't need to create a new research study or set up a focus group. You can find most answers in the data you already have.

Typically, this data is stored in third-party solutions or a central database, but there are ways to export it or connect to a feedback analysis solution through integrations or an API.

Utilize untapped qualitative data channels

There are many online qualitative data sources you may not have considered. For example, you can find useful qualitative data in social media channels like Twitter or Facebook. Online forums, review sites, and online communities such as Discourse or Reddit also contain valuable data about your customers, or research questions.

If you are considering performing a qualitative benchmark analysis against competitors - the internet is your best friend, and review analysis is a great place to start. Gathering feedback in competitor reviews on sites like Trustpilot, G2, Capterra, Better Business Bureau or on app stores is a great way to perform a competitor benchmark analysis.

Customer feedback analysis software often has integrations into social media and review sites, or you could use a solution like DataMiner to scrape the reviews.

G2.com reviews of the product Airtable. You could pull reviews from G2 for your analysis.

Step 2: Connect & organize all your qualitative data

Now you have all this qualitative data, but there's a problem: the data is unstructured. Before feedback can be analyzed and assigned any value, it needs to be organized in a single place. Why is this important? Consistency!

If all data is easily accessible in one place and analyzed in a consistent manner, you will have an easier time summarizing and making decisions based on this data.

The manual approach to organizing your data

The classic method of structuring qualitative data is to plot all the raw data you’ve gathered into a spreadsheet.

Typically, research and support teams would share large Excel sheets and different business units would make sense of the qualitative feedback data on their own. Each team collects and organizes the data in a way that best suits them, which means the feedback tends to be kept in separate silos.

An alternative and a more robust solution is to store feedback in a central database, like Snowflake or Amazon Redshift .

Keep in mind that when you organize your data in this way, you are often preparing it to be imported into another software. If you go the route of a database, you would need to use an API to push the feedback into a third-party software.

Computer-assisted qualitative data analysis software (CAQDAS)

Traditionally within the manual analysis approach (but not always), qualitative data is imported into CAQDAS software for coding.

In the early 2000s, CAQDAS software was popularised by developers such as ATLAS.ti, NVivo and MAXQDA and eagerly adopted by researchers to assist with the organizing and coding of data.  

The benefits of using computer-assisted qualitative data analysis software:

  • Assists in the organizing of your data
  • Opens you up to exploring different interpretations of your data analysis
  • Allows you to share your dataset more easily and supports group collaboration (allows for secondary analysis)

However, you still need to code the data, uncover the themes and do the analysis yourself. Therefore, it is still a manual approach.

The user interface of CAQDAS software 'NVivo'

Organizing your qualitative data in a feedback repository

Another solution to organizing your qualitative data is to upload it into a feedback repository where it can be unified with your other data , and easily searchable and taggable. There are a number of software solutions that act as a central repository for your qualitative research data. Here are a couple solutions that you could investigate:  

  • Dovetail: Dovetail is a research repository with a focus on video and audio transcriptions. You can tag your transcriptions within the platform for theme analysis. You can also upload your other qualitative data such as research reports, survey responses, support conversations, and customer interviews. Dovetail acts as a single, searchable repository and makes it easier to collaborate with other people around your qualitative research.
  • EnjoyHQ: EnjoyHQ is another research repository with similar functionality to Dovetail. It boasts a more sophisticated search engine, but it has a higher starting subscription cost.

Organizing your qualitative data in a feedback analytics platform

If you have a lot of qualitative customer or employee feedback, from the likes of customer surveys or employee surveys, you will benefit from a feedback analytics platform. A feedback analytics platform is a software that automates the process of both sentiment analysis and thematic analysis . Companies use the integrations offered by these platforms to directly tap into their qualitative data sources (review sites, social media, survey responses, etc.). The data collected is then organized and analyzed consistently within the platform.

If you have data prepared in a spreadsheet, it can also be imported into feedback analytics platforms.

Once all this rich data has been organized within the feedback analytics platform, it is ready to be coded and themed, within the same platform. Thematic is a feedback analytics platform that offers one of the largest libraries of integrations with qualitative data sources.

Some of qualitative data integrations offered by Thematic

Step 3: Coding your qualitative data

Your feedback data is now organized in one place. Either within your spreadsheet, CAQDAS, feedback repository or within your feedback analytics platform. The next step is to code your feedback data so we can extract meaningful insights in the next step.

Coding is the process of labelling and organizing your data in such a way that you can then identify themes in the data, and the relationships between these themes.

To simplify the coding process, you will take small samples of your customer feedback data, come up with a set of codes, or categories capturing themes, and label each piece of feedback, systematically, for patterns and meaning. Then you will take a larger sample of data, revising and refining the codes for greater accuracy and consistency as you go.

If you choose to use a feedback analytics platform, much of this process will be automated and accomplished for you.

The terms to describe different categories of meaning (‘theme’, ‘code’, ‘tag’, ‘category’ etc) can be confusing as they are often used interchangeably.  For clarity, this article will use the term ‘code’.

To code means to identify key words or phrases and assign them to a category of meaning. “I really hate the customer service of this computer software company” would be coded as “poor customer service”.
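
To make this concrete, here is a minimal sketch (not from the original guide) of deductive, keyword-based coding in Python. The code names and keyword rules are illustrative assumptions, not a prescribed taxonomy.

```python
# Minimal deductive-coding sketch: apply predefined codes to feedback using
# illustrative keyword rules (both the codes and the keywords are hypothetical).
CODE_RULES = {
    "poor customer service": ["customer service", "support", "helpdesk"],
    "pricing concerns": ["expensive", "price", "cost"],
    "ease of use": ["easy to use", "intuitive", "simple"],
}

def code_feedback(text):
    """Return every code whose keywords appear in the feedback text."""
    text = text.lower()
    return [code for code, keywords in CODE_RULES.items()
            if any(keyword in text for keyword in keywords)]

feedback = "I really hate the customer service of this computer software company"
print(code_feedback(feedback))  # ['poor customer service']
```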

How to manually code your qualitative data

  • Decide whether you will use deductive or inductive coding. Deductive coding is when you create a list of predefined codes, and then assign them to the qualitative data. Inductive coding is the opposite of this, you create codes based on the data itself. Codes arise directly from the data and you label them as you go. You need to weigh up the pros and cons of each coding method and select the most appropriate.
  • Read through the feedback data to get a broad sense of what it reveals. Now it’s time to start assigning your first set of codes to statements and sections of text.
  • Keep repeating step 2, adding new codes and revising the code description as often as necessary.  Once it has all been coded, go through everything again, to be sure there are no inconsistencies and that nothing has been overlooked.
  • Create a code frame to group your codes. The coding frame is the organizational structure of all your codes. And there are two commonly used types of coding frames, flat, or hierarchical. A hierarchical code frame will make it easier for you to derive insights from your analysis.
  • Based on the number of times a particular code occurs, you can now see the common themes in your feedback data. This is insightful! If ‘bad customer service’ is a common code, it’s time to take action.

We have a detailed guide dedicated to manually coding your qualitative data .

Example of a hierarchical coding frame in qualitative data analysis

Using software to speed up manual coding of qualitative data

An Excel spreadsheet is still a popular method for coding. But various software solutions can help speed up this process. Here are some examples.

  • CAQDAS / NVivo - CAQDAS software has built-in functionality that allows you to code text within their software. You may find the interface the software offers easier for managing codes than a spreadsheet.
  • Dovetail/EnjoyHQ - You can tag transcripts and other textual data within these solutions. As they are also repositories you may find it simpler to keep the coding in one platform.
  • IBM SPSS - SPSS is a statistical analysis software that may make coding easier than in a spreadsheet.
  • Ascribe - Ascribe’s ‘Coder’ is a coding management system. Its user interface will make it easier for you to manage your codes.

Automating the qualitative coding process using thematic analysis software

In solutions which speed up the manual coding process, you still have to come up with valid codes and often apply codes manually to pieces of feedback. But there are also solutions that automate both the discovery and the application of codes.

Advances in machine learning have now made it possible to read, code and structure qualitative data automatically. This type of automated coding is offered by thematic analysis software .

Automation makes it far simpler and faster to code the feedback and group it into themes. By incorporating natural language processing (NLP) into the software, the AI looks across sentences and phrases to identify common themes and meaningful statements. Some automated solutions detect repeating patterns and assign codes to them; others make you train the AI by providing examples. You could say that the AI learns the meaning of the feedback on its own.
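
As an illustration only (this is not Thematic’s actual implementation), the sketch below shows one common way to surface themes automatically from raw feedback, using TF-IDF vectorization and NMF topic modelling from scikit-learn. The sample feedback and the choice of three themes are assumptions.

```python
# Illustrative sketch only: discover themes in raw feedback with
# TF-IDF vectorization + NMF topic modelling (scikit-learn).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

feedback = [
    "Support took days to reply to my ticket",
    "The support agent was rude and unhelpful",
    "Love the new dashboard, very easy to use",
    "The interface is intuitive and clean",
    "Pricing feels too high for small teams",
    "The subscription cost increased again this year",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(feedback)

nmf = NMF(n_components=3, random_state=0)  # assume roughly three themes
weights = nmf.fit_transform(X)             # theme weight per piece of feedback

terms = tfidf.get_feature_names_out()
for i, topic in enumerate(nmf.components_):
    top_terms = [terms[j] for j in topic.argsort()[-3:][::-1]]
    print(f"Theme {i}: {', '.join(top_terms)}")

print(weights.argmax(axis=1))  # which theme each piece of feedback leans towards
```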

Thematic automates the coding of qualitative feedback regardless of source. There’s no need to set up themes or categories in advance. Simply upload your data and wait a few minutes. You can also manually edit the codes to further refine their accuracy.  Experiments conducted indicate that Thematic’s automated coding is just as accurate as manual coding .

Paired with sentiment analysis and advanced text analytics - these automated solutions become powerful for deriving quality business or research insights.

You could also build your own , if you have the resources!

The key benefits of using an automated coding solution

Automated analysis can often be set up fast and there’s the potential to uncover things that would never have been revealed if you had given the software a prescribed list of themes to look for.

Because the model applies a consistent rule to the data, it captures phrases or statements that a human eye might have missed.

Complete and consistent analysis of customer feedback enables more meaningful findings. Leading us into step 4.

Step 4: Analyze your data: Find meaningful insights

Now we are going to analyze our data to find insights. This is where we start to answer our research questions. Keep in mind that step 4 and step 5 (tell the story) have some overlap. This is because creating visualizations is part of both the analysis process and reporting.

The task of uncovering insights is to scour through the codes that emerge from the data and draw meaningful correlations from them. It is also about making sure each insight is distinct and has enough data to support it.

Part of the analysis is to establish how much each code relates to different demographics and customer profiles, and identify whether there’s any relationship between these data points.

Manually create sub-codes to improve the quality of insights

If your code frame only has one level, you may find that your codes are too broad to be able to extract meaningful insights. This is where it is valuable to create sub-codes to your primary codes. This process is sometimes referred to as meta coding.

Note: If you take an inductive coding approach, you can create sub-codes as you are reading through your feedback data and coding it.

While time-consuming, this exercise will improve the quality of your analysis. Here is an example of what sub-codes could look like.

Example of sub-codes

You need to carefully read your qualitative data to create quality sub-codes. But as you can see, the depth of analysis is greatly improved. By calculating the frequency of these sub-codes, you can get insight into which customer service problems you can immediately address.
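
As a rough illustration, the sketch below counts how often each code and sub-code pair occurs, assuming the feedback has already been coded into hypothetical records like those shown.

```python
# Count how often each (code, sub-code) pair occurs in already-coded feedback.
# The coded records below are made up for illustration.
from collections import Counter

coded_feedback = [
    {"code": "poor customer service", "sub_code": "slow response"},
    {"code": "poor customer service", "sub_code": "rude agent"},
    {"code": "poor customer service", "sub_code": "slow response"},
    {"code": "pricing concerns", "sub_code": "unexpected fees"},
]

frequency = Counter((item["code"], item["sub_code"]) for item in coded_feedback)
for (code, sub_code), count in frequency.most_common():
    print(f"{code} > {sub_code}: {count}")
```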

Correlate the frequency of codes to customer segments

Many businesses use customer segmentation, and you may have your own respondent segments that you can apply to your qualitative analysis. Segmentation is the practice of dividing customers or research respondents into subgroups.

Segments can be based on:

  • Demographics
  • And any other data type that you care to segment by

It is particularly useful to see the occurrence of codes within your segments. If one of your customer segments is considered unimportant to your business, but they are the cause of nearly all customer service complaints, it may be in your best interest to focus attention elsewhere. This is a useful insight!

Manually visualizing coded qualitative data

There are formulas you can use to visualize key insights in your data. The formulas we will suggest are imperative if you are measuring a score alongside your feedback.

If you are collecting a metric alongside your qualitative data, this impact analysis is a key visualization. Impact answers the question: “What’s the impact of a code on my overall score?”. Using Net Promoter Score (NPS) as an example, first you need to:

  • Calculate overall NPS (call this A)
  • Calculate NPS in the subset of responses that do not contain that theme (call this B)
  • Subtract B from A

Then you can use this simple formula to calculate code impact on NPS .

Visualizing qualitative data: Calculating the impact of a code on your score
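
For illustration, here is a minimal Python sketch of that calculation, assuming NPS is computed as the percentage of promoters (9-10) minus the percentage of detractors (0-6); the responses and codes are made up.

```python
# Sketch of the impact calculation above: overall NPS (A) minus NPS of
# responses that do NOT mention the code (B). Sample data is made up.
def nps(scores):
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)

# Each response: (NPS score, set of codes assigned to its comment)
responses = [
    (10, {"ease of use"}),
    (9,  {"ease of use"}),
    (3,  {"poor customer service"}),
    (6,  {"poor customer service", "pricing concerns"}),
    (8,  set()),
]

code = "poor customer service"
overall = nps([s for s, _ in responses])                    # A
without = nps([s for s, c in responses if code not in c])   # B
print(f"Impact of '{code}' on NPS: {overall - without:+.1f} points")
```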

You can then visualize this data using a bar chart.

You can download our CX toolkit - it includes a template to recreate this.

Trends over time

This analysis can help you answer questions like: “Which codes are linked to decreases or increases in my score over time?”

We need to compare two sequences of numbers: NPS over time and code frequency over time . Using Excel, calculate the correlation between the two sequences, which can be either positive (the more codes the higher the NPS, see picture below), or negative (the more codes the lower the NPS).

Now you need to plot code frequency against the absolute value of code correlation with NPS. Here is the formula:

Analyzing qualitative data: Calculate which codes are linked to increases or decreases in my score
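
As a hedged sketch of the same idea outside Excel, the snippet below uses pandas to correlate monthly NPS with monthly code frequency; the monthly figures are invented for illustration.

```python
# Sketch: correlate monthly NPS with monthly code frequency using pandas.
# The monthly figures below are illustrative.
import pandas as pd

df = pd.DataFrame({
    "month": ["Jan", "Feb", "Mar", "Apr", "May", "Jun"],
    "nps": [32, 30, 25, 27, 22, 20],
    "slow_response_mentions": [14, 18, 25, 22, 30, 33],
})

correlation = df["nps"].corr(df["slow_response_mentions"])
print(f"Correlation with NPS: {correlation:.2f}")  # strongly negative here

# Plot code frequency against the absolute correlation to prioritise codes
frequency = df["slow_response_mentions"].sum()
print(f"Frequency: {frequency}, |correlation|: {abs(correlation):.2f}")
```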

The visualization could look like this:

Visualizing qualitative data trends over time

These are two examples, but there are more. For a third manual formula, and to learn why word clouds are not an insightful form of analysis, read our visualizations article .

Using a text analytics solution to automate analysis

Automated text analytics solutions enable codes and sub-codes to be pulled out of the data automatically. This makes it far faster and easier to identify what’s driving negative or positive results. And to pick up emerging trends and find all manner of rich insights in the data.

Another benefit of AI-driven text analytics software is its built-in capability for sentiment analysis, which provides the emotive context behind your feedback and other qualitative textual data therein.

Thematic provides text analytics that goes further by allowing users to apply their expertise on business context to edit or augment the AI-generated outputs.

Since the move away from manual research is generally about reducing the human element, adding human input to the technology might sound counter-intuitive. However, this is mostly to make sure important business nuances in the feedback aren’t missed during coding. The result is a higher accuracy of analysis. This is sometimes referred to as augmented intelligence .

Codes displayed by volume within Thematic. You can 'manage themes' to introduce human input.

Step 5: Report on your data: Tell the story

The last step of analyzing your qualitative data is to report on it, to tell the story. At this point, the codes are fully developed and the focus is on communicating the narrative to the audience.

A coherent outline of the qualitative research, the findings and the insights is vital for stakeholders to discuss and debate before they can devise a meaningful course of action.

Creating graphs and reporting in Powerpoint

Typically, qualitative researchers take the tried and tested approach of distilling their report into a series of charts, tables and other visuals which are woven into a narrative for presentation in Powerpoint.

Using visualization software for reporting

With data transformation and APIs, the analyzed data can be shared with data visualisation software, such as Power BI, Tableau, Google Data Studio or Looker. Power BI and Tableau are among the most preferred options.

Visualizing your insights inside a feedback analytics platform

Feedback analytics platforms, like Thematic, incorporate visualisation tools that intuitively turn key data and insights into graphs. This removes the time-consuming work of constructing charts to visually identify patterns and creates more time to focus on building a compelling narrative that highlights the insights, in bite-size chunks, for executive teams to review.

Using a feedback analytics platform with visualization tools means you don’t have to use a separate product for visualizations. You can export graphs into Powerpoints straight from the platforms.

Two examples of qualitative data visualizations within Thematic

Conclusion - Manual or Automated?

There are those who remain deeply invested in the manual approach - because it’s familiar, because they’re reluctant to spend money and time learning new software, or because they’ve been burned by the overpromises of AI.  

For projects that involve small datasets, manual analysis makes sense. For example, if the objective is simply to quantify a simple question like “Do customers prefer X concepts to Y?” and the findings are being extracted from a small set of focus groups and interviews, sometimes it’s easier to just read them.

However, as new generations come into the workplace, it’s technology-driven solutions that feel more comfortable and practical. And the merits are undeniable.  Especially if the objective is to go deeper and understand the ‘why’ behind customers’ preference for X or Y. And even more especially if time and money are considerations.

The ability to collect a free flow of qualitative feedback data at the same time as the metric means AI can cost-effectively scan, crunch, score and analyze a ton of feedback from one system in one go. And time-intensive processes like focus groups, or coding, that used to take weeks, can now be completed in a matter of hours or days.

But aside from the ever-present business case to speed things up and keep costs down, there are also powerful research imperatives for automated analysis of qualitative data: namely, accuracy and consistency.

Finding insights hidden in feedback requires consistency, especially in coding.  Not to mention catching all the ‘unknown unknowns’ that can skew research findings and steering clear of cognitive bias.

Some say that without manual data analysis researchers won’t get an accurate “feel” for the insights. However, the larger the data sets are, the harder it is to sort through and organize feedback that has been pulled from different places. And the more difficult it is to stay on course, the greater the risk of drawing incorrect or incomplete conclusions.

Though the process steps for qualitative data analysis have remained pretty much unchanged since psychologist Paul Felix Lazarsfeld paved the path a hundred years ago, the impact digital technology has had on types of qualitative feedback data and the approach to the analysis are profound.  

If you want to try an automated feedback analysis solution on your own qualitative data, you can get started with Thematic .



Data Analysis: Types, Methods & Techniques (a Complete List)

( Updated Version )

While the term sounds intimidating, “data analysis” is nothing more than making sense of information in a table. It consists of filtering, sorting, grouping, and manipulating data tables with basic algebra and statistics.

In fact, you don’t need experience to understand the basics. You have already worked with data extensively in your life, and “analysis” is nothing more than a fancy word for good sense and basic logic.

Over time, people have intuitively categorized the best logical practices for treating data. These categories are what we call today types , methods , and techniques .

This article provides a comprehensive list of types, methods, and techniques, and explains the difference between them.

For a practical intro to data analysis (including types, methods, & techniques), check out our Intro to Data Analysis eBook for free.

Descriptive, Diagnostic, Predictive, & Prescriptive Analysis

If you Google “types of data analysis,” the first few results will explore descriptive , diagnostic , predictive , and prescriptive analysis. Why? Because these names are easy to understand and are used a lot in “the real world.”

Descriptive analysis is an informational method, diagnostic analysis explains “why” a phenomenon occurs, predictive analysis seeks to forecast the result of an action, and prescriptive analysis identifies solutions to a specific problem.

That said, these are only four branches of a larger analytical tree.

Good data analysts know how to position these four types within other analytical methods and tactics, allowing them to leverage strengths and weaknesses in each to uproot the most valuable insights.

Let’s explore the full analytical tree to understand how to appropriately assess and apply these four traditional types.

Tree diagram of Data Analysis Types, Methods, and Techniques

Here’s a picture to visualize the structure and hierarchy of data analysis types, methods, and techniques.

If it’s too small you can view the picture in a new tab . Open it to follow along!


Note: basic descriptive statistics such as mean, median, and mode, as well as standard deviation, are not shown because most people are already familiar with them. In the diagram, they would fall under the “descriptive” analysis type.

Tree Diagram Explained

The highest-level classification of data analysis is quantitative vs. qualitative. Quantitative implies numbers while qualitative implies information other than numbers.

Quantitative data analysis then splits into mathematical analysis and artificial intelligence (AI) analysis. Mathematical types then branch into descriptive, diagnostic, predictive, and prescriptive.

Methods falling under mathematical analysis include clustering, classification, forecasting, and optimization. Qualitative data analysis methods include content analysis, narrative analysis, discourse analysis, framework analysis, and/or grounded theory.

Moreover, mathematical techniques include regression, Naïve Bayes, simple exponential smoothing, cohorts, factors, linear discriminants, and more, whereas techniques falling under the AI type include artificial neural networks, decision trees, evolutionary programming, and fuzzy logic. Techniques under qualitative analysis include text analysis, coding, idea pattern analysis, and word frequency.

It’s a lot to remember! Don’t worry, once you understand the relationship and motive behind all these terms, it’ll be like riding a bike.

We’ll move down the list from top to bottom and I encourage you to open the tree diagram above in a new tab so you can follow along .

But first, let’s just address the elephant in the room: what’s the difference between methods and techniques anyway?

Difference between methods and techniques

Though often used interchangeably, methods and techniques are not the same. By definition, methods are the process by which techniques are applied, and techniques are the practical application of those methods.

For example, consider driving. Methods include staying in your lane, stopping at a red light, and parking in a spot. Techniques include turning the steering wheel, braking, and pushing the gas pedal.

Data sets: observations and fields

It’s important to understand the basic structure of data tables to comprehend the rest of the article. A data set consists of one far-left column containing observations, then a series of columns containing the fields (aka “traits” or “characteristics”) that describe each observation. For example, imagine we want a data table for fruit. It might look like this:

The fruit (observation) | Avg. weight (field 1) | Avg. diameter (field 2) | Avg. time to eat (field 3)
Watermelon | 20 lbs (9 kg) | 16 inch (40 cm) | 20 minutes
Apple | .33 lbs (.15 kg) | 4 inch (8 cm) | 5 minutes
Orange | .30 lbs (.14 kg) | 4 inch (8 cm) | 5 minutes

Now let’s turn to types, methods, and techniques. Each heading below consists of a description, relative importance, the nature of data it explores, and the motivation for using it.

Quantitative Analysis

  • It accounts for more than 50% of all data analysis and is by far the most widespread and well-known type of data analysis.
  • As you have seen, it holds descriptive, diagnostic, predictive, and prescriptive methods, which in turn hold some of the most important techniques available today, such as clustering and forecasting.
  • It can be broken down into mathematical and AI analysis.
  • Importance: Very high. Quantitative analysis is a must for anyone interested in becoming or improving as a data analyst.
  • Nature of Data: data treated under quantitative analysis is, quite simply, quantitative. It encompasses all numeric data.
  • Motive: to extract insights. (Note: we’re at the top of the pyramid, this gets more insightful as we move down.)

Qualitative Analysis

  • It accounts for less than 30% of all data analysis and is common in social sciences .
  • It can refer to the simple recognition of qualitative elements, which is not analytic in any way, but most often refers to methods that assign numeric values to non-numeric data for analysis.
  • Because of this, some argue that it’s ultimately a quantitative type.
  • Importance: Medium. In general, knowing qualitative data analysis is not common or even necessary for corporate roles. However, for researchers working in social sciences, its importance is very high .
  • Nature of Data: data treated under qualitative analysis is non-numeric. However, as part of the analysis, analysts turn non-numeric data into numbers, at which point many argue it is no longer qualitative analysis.
  • Motive: to extract insights. (This will be more important as we move down the pyramid.)

Mathematical Analysis

  • Description: mathematical data analysis is a subtype of quantitative data analysis that designates methods and techniques based on statistics, algebra, and logical reasoning to extract insights. It stands in opposition to artificial intelligence analysis.
  • Importance: Very High. The most widespread methods and techniques fall under mathematical analysis. In fact, it’s so common that many people use “quantitative” and “mathematical” analysis interchangeably.
  • Nature of Data: numeric. By definition, all data under mathematical analysis are numbers.
  • Motive: to extract measurable insights that can be used to act upon.

Artificial Intelligence & Machine Learning Analysis

  • Description: artificial intelligence and machine learning analyses designate techniques based on the titular skills. They are not traditionally mathematical, but they are quantitative since they use numbers. Applications of AI & ML analysis techniques are still developing and are not yet mainstream across the field.
  • Importance: Medium. As of today (September 2020), you don’t need to be fluent in AI & ML data analysis to be a great analyst. BUT, if it’s a field that interests you, learn it. Many believe that in 10 years’ time its importance will be very high.
  • Nature of Data: numeric.
  • Motive: to create calculations that build on themselves in order and extract insights without direct input from a human.

Descriptive Analysis

  • Description: descriptive analysis is a subtype of mathematical data analysis that uses methods and techniques to provide information about the size, dispersion, groupings, and behavior of data sets. This may sound complicated, but just think about mean, median, and mode: all three are types of descriptive analysis. They provide information about the data set. We’ll look at specific techniques below.
  • Importance: Very high. Descriptive analysis is among the most commonly used data analyses in both corporations and research today.
  • Nature of Data: the nature of data under descriptive statistics is sets. A set is simply a collection of numbers that behaves in predictable ways. Data reflects real life, and there are patterns everywhere to be found. Descriptive analysis describes those patterns.
  • Motive: the motive behind descriptive analysis is to understand how numbers in a set group together, how far apart they are from each other, and how often they occur. As with most statistical analysis, the more data points there are, the easier it is to describe the set.

Diagnostic Analysis

  • Description: diagnostic analysis answers the question “why did it happen?” It is an advanced type of mathematical data analysis that manipulates multiple techniques, but does not own any single one. Analysts engage in diagnostic analysis when they try to explain why.
  • Importance: Very high. Diagnostics are probably the most important type of data analysis for people who don’t do analysis because they’re valuable to anyone who’s curious. They’re most common in corporations, as managers often only want to know the “why.”
  • Nature of Data : data under diagnostic analysis are data sets. These sets in themselves are not enough under diagnostic analysis. Instead, the analyst must know what’s behind the numbers in order to explain “why.” That’s what makes diagnostics so challenging yet so valuable.
  • Motive: the motive behind diagnostics is to diagnose — to understand why.

Predictive Analysis

  • Description: predictive analysis uses past data to project future data. It’s very often one of the first kinds of analysis new researchers and corporate analysts use because it is intuitive. It is a subtype of the mathematical type of data analysis, and its three notable techniques are regression, moving average, and exponential smoothing.
  • Importance: Very high. Predictive analysis is critical for any data analyst working in a corporate environment. Companies always want to know what the future will hold — especially for their revenue.
  • Nature of Data: Because past and future imply time, predictive data always includes an element of time. Whether it’s minutes, hours, days, months, or years, we call this time series data . In fact, this data is so important that I’ll mention it twice so you don’t forget: predictive analysis uses time series data .
  • Motive: the motive for investigating time series data with predictive analysis is to predict the future in the most analytical way possible.

Prescriptive Analysis

  • Description: prescriptive analysis is a subtype of mathematical analysis that answers the question “what will happen if we do X?” It’s largely underestimated in the data analysis world because it requires diagnostic and descriptive analyses to be done before it even starts. More than simple predictive analysis, prescriptive analysis builds entire data models to show how a simple change could impact the ensemble.
  • Importance: High. Prescriptive analysis is most common under the finance function in many companies. Financial analysts use it to build a financial model of the financial statements that show how that data will change given alternative inputs.
  • Nature of Data: the nature of data in prescriptive analysis is data sets. These data sets contain patterns that respond differently to various inputs. Data that is useful for prescriptive analysis contains correlations between different variables. It’s through these correlations that we establish patterns and prescribe action on this basis. This analysis cannot be performed on data that exists in a vacuum — it must be viewed on the backdrop of the tangibles behind it.
  • Motive: the motive for prescriptive analysis is to establish, with an acceptable degree of certainty, what results we can expect given a certain action. As you might expect, this necessitates that the analyst or researcher be aware of the world behind the data, not just the data itself.

Clustering Method

  • Description: the clustering method groups data points together based on their relative closeness to further explore and treat them based on these groupings. There are two ways to group clusters: intuitively and statistically (or K-means).
  • Importance: Very high. Though most corporate roles group clusters intuitively based on management criteria, a solid understanding of how to group them mathematically is an excellent descriptive and diagnostic approach to allow for prescriptive analysis thereafter.
  • Nature of Data : the nature of data useful for clustering is sets with 1 or more data fields. While most people are used to looking at only two dimensions (x and y), clustering becomes more accurate the more fields there are.
  • Motive: the motive for clustering is to understand how data sets group and to explore them further based on those groups.
  • Here’s an example set:

Example data set for clustering

Classification Method

  • Description: the classification method aims to separate and group data points based on common characteristics . This can be done intuitively or statistically.
  • Importance: High. While simple on the surface, classification can become quite complex. It’s very valuable in corporate and research environments, but can feel like it’s not worth the work. A good analyst can execute it quickly to deliver results.
  • Nature of Data: the nature of data useful for classification is data sets. As we will see, it can be used on qualitative data as well as quantitative. This method requires knowledge of the substance behind the data, not just the numbers themselves.
  • Motive: the motive for classification is to group data not based on mathematical relationships (which would be clustering), but by predetermined outputs. This is why it’s less useful for diagnostic analysis, and more useful for prescriptive analysis.

Forecasting Method

  • Description: the forecasting method uses past time series data to forecast the future.
  • Importance: Very high. Forecasting falls under predictive analysis and is arguably the most common and most important method in the corporate world. It is less useful in research, which prefers to understand the known rather than speculate about the future.
  • Nature of Data: data useful for forecasting is time series data, which, as we’ve noted, always includes a variable of time.
  • Motive: the motive for the forecasting method is the same as that of predictive analysis: to confidently estimate future values.

Optimization Method

  • Description: the optimization method maximizes or minimizes values in a set given a set of criteria. It is arguably most common in prescriptive analysis. In mathematical terms, it is maximizing or minimizing a function given certain constraints.
  • Importance: Very high. The idea of optimization applies to more analysis types than any other method. In fact, some argue that it is the fundamental driver behind data analysis. You would use it everywhere in research and in a corporation.
  • Nature of Data: the nature of optimizable data is a data set of at least two points.
  • Motive: the motive behind optimization is to achieve the best result possible given certain conditions.

Content Analysis Method

  • Description: content analysis is a method of qualitative analysis that quantifies textual data to track themes across a document. It’s most common in academic fields and in social sciences, where written content is the subject of inquiry.
  • Importance: High. In a corporate setting, content analysis as such is less common. If anything, Naïve Bayes (a technique we’ll look at below) is the closest corporations come to analyzing text. However, it is of the utmost importance for researchers. If you’re a researcher, check out this article on content analysis .
  • Nature of Data: data useful for content analysis is textual data.
  • Motive: the motive behind content analysis is to understand themes expressed in a large text

Narrative Analysis Method

  • Description: narrative analysis is a method of qualitative analysis that quantifies stories to trace themes in them. It differs from content analysis because it focuses on stories rather than research documents, and the techniques used are slightly different from those in content analysis (the differences are nuanced and outside the scope of this article).
  • Importance: Low. Unless you are highly specialized in working with stories, narrative analysis is rare.
  • Nature of Data: the nature of the data useful for the narrative analysis method is narrative text.
  • Motive: the motive for narrative analysis is to uncover hidden patterns in narrative text.

Discourse Analysis Method

  • Description: the discourse analysis method falls under qualitative analysis and uses thematic coding to trace patterns in real-life discourse. That said, real-life discourse is oral, so it must first be transcribed into text.
  • Importance: Low. Unless you are focused on understanding real-world idea sharing in a research setting, this kind of analysis is less common than the others on this list.
  • Nature of Data: the nature of data useful in discourse analysis is first audio files, then transcriptions of those audio files.
  • Motive: the motive behind discourse analysis is to trace patterns of real-world discussions. (As a spooky sidenote, have you ever felt like your phone microphone was listening to you and making reading suggestions? If it was, the method was discourse analysis.)

Framework Analysis Method

  • Description: the framework analysis method falls under qualitative analysis and uses similar thematic coding techniques to content analysis. However, where content analysis aims to discover themes, framework analysis starts with a framework and only considers elements that fall in its purview.
  • Importance: Low. As with the other textual analysis methods, framework analysis is less common in corporate settings. Even in the world of research, only some use it. Strangely, it’s very common for legislative and political research.
  • Nature of Data: the nature of data useful for framework analysis is textual.
  • Motive: the motive behind framework analysis is to understand what themes and parts of a text match your search criteria.

Grounded Theory Method

  • Description: the grounded theory method falls under qualitative analysis and uses thematic coding to build theories around those themes.
  • Importance: Low. Like other qualitative analysis techniques, grounded theory is less common in the corporate world. Even among researchers, you would be hard pressed to find many using it. Though powerful, it’s simply too rare to spend time learning.
  • Nature of Data: the nature of data useful in the grounded theory method is textual.
  • Motive: the motive of grounded theory method is to establish a series of theories based on themes uncovered from a text.

Clustering Technique: K-Means

  • Description: k-means is a clustering technique in which data points are grouped in clusters that have the closest means. Though not considered AI or ML, it relies on unsupervised learning to reevaluate clusters as data points are added. Clustering techniques can be used in diagnostic, descriptive, & prescriptive data analyses (see the sketch after this list).
  • Importance: Very important. If you only take 3 things from this article, k-means clustering should be part of it. It is useful in any situation where n observations have multiple characteristics and we want to put them in groups.
  • Nature of Data: the nature of data is at least one characteristic per observation, but the more the merrier.
  • Motive: the motive for clustering techniques such as k-means is to group observations together and either understand or react to them.
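
Here is a minimal k-means sketch using scikit-learn, as referenced above; the observations and their two fields are made-up values chosen only to show the mechanics.

```python
# Minimal k-means sketch with scikit-learn; the observations are made up.
import numpy as np
from sklearn.cluster import KMeans

# Each row is an observation with two fields, e.g. (avg. order value, orders per month)
data = np.array([
    [20, 1], [22, 2], [25, 1],      # low-value, infrequent
    [80, 10], [85, 12], [90, 11],   # high-value, frequent
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(kmeans.labels_)           # cluster assignment for each observation
print(kmeans.cluster_centers_)  # the mean of each cluster
```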

Regression Technique

  • Description: simple and multivariable regressions use either one independent variable or a combination of multiple independent variables to calculate a correlation to a single dependent variable using constants. Regressions are almost synonymous with correlation today. (See the sketch after this list.)
  • Importance: Very high. Along with clustering, if you only take 3 things from this article, regression techniques should be part of it. They’re everywhere in corporate and research fields alike.
  • Nature of Data: the nature of data used in regressions is data sets with “n” number of observations and as many variables as are reasonable. It’s important, however, to distinguish between time series data and regression data. You cannot use regressions on time series data without accounting for time; the easier way is to use techniques under the forecasting method.
  • Motive: The motive behind regression techniques is to understand correlations between independent variable(s) and a dependent one.
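
Below is a minimal regression sketch with scikit-learn, as referenced above; the ad-spend and revenue figures are invented purely to illustrate fitting and predicting.

```python
# Minimal linear regression sketch with scikit-learn; the data is made up.
import numpy as np
from sklearn.linear_model import LinearRegression

ad_spend = np.array([[10], [20], [30], [40], [50]])   # independent variable
revenue = np.array([120, 190, 310, 390, 510])          # dependent variable

model = LinearRegression().fit(ad_spend, revenue)
print(f"slope: {model.coef_[0]:.1f}, intercept: {model.intercept_:.1f}")
print(f"R^2: {model.score(ad_spend, revenue):.3f}")
print(model.predict([[60]]))   # predicted revenue at a new spend level
```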

Naïve Bayes Technique

  • Description: Naïve Bayes is a classification technique that uses simple probability to classify items based on previous classifications. In plain English, the formula would be “the chance that a thing with trait x belongs to class c depends on (=) the overall chance of trait x appearing in class c, multiplied by the overall chance of class c, divided by the overall chance of getting trait x.” As a formula, it’s P(c|x) = P(x|c) * P(c) / P(x). (See the sketch after this list.)
  • Importance: High. Naïve Bayes is a very common, simple classification technique because it’s effective with large data sets and can be applied to any instance in which there is a class. Google, for example, might use it to group webpages into groups for certain search engine queries.
  • Nature of Data: the nature of data for Naïve Bayes is at least one class and at least two traits in a data set.
  • Motive: the motive behind Naïve Bayes is to classify observations based on previous data. It’s thus considered part of predictive analysis.
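
Here is the sketch referenced above: a minimal Naïve Bayes text classifier built with scikit-learn. The training texts, labels, and class names are illustrative assumptions.

```python
# Minimal Naive Bayes text-classification sketch with scikit-learn;
# the training examples and classes are made up.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "cheap flights and hotel deals",       # class: travel
    "book your beach holiday today",       # class: travel
    "quarterly earnings beat estimates",   # class: finance
    "stock market closes at record high",  # class: finance
]
labels = ["travel", "travel", "finance", "finance"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["last minute hotel deals"]))      # likely 'travel'
print(model.predict(["earnings and stock forecast"]))  # likely 'finance'
```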

Cohorts Technique

  • Description: cohorts technique is a type of clustering method used in behavioral sciences to separate users by common traits. As with clustering, it can be done intuitively or mathematically, the latter of which would simply be k-means.
  • Importance: Very high. While it resembles k-means, the cohort technique is more of a high-level counterpart. In fact, most people are familiar with it as a part of Google Analytics. It’s most common in marketing departments in corporations, rather than in research.
  • Nature of Data: the nature of cohort data is data sets in which users are the observation and other fields are used as defining traits for each cohort.
  • Motive: the motive for cohort analysis techniques is to group similar users and analyze how you retain them and how they churn.

Factor Technique

  • Description: the factor analysis technique is a way of grouping many traits into a single factor to expedite analysis. For example, factors can be used as traits for Naïve Bayes classifications instead of more general fields.
  • Importance: High. While not commonly employed in corporations, factor analysis is hugely valuable. Good data analysts use it to simplify their projects and communicate them more clearly.
  • Nature of Data: the nature of data useful in factor analysis techniques is data sets with a large number of fields for each observation.
  • Motive: the motive for using factor analysis techniques is to reduce the number of fields in order to more quickly analyze and communicate findings.

Linear Discriminants Technique

  • Description: linear discriminant analysis techniques are similar to regressions in that they use one or more independent variables to determine a dependent variable; however, the linear discriminant technique falls under a classifier method since it uses traits as independent variables and class as a dependent variable. In this way, it becomes both a classifying method AND a predictive method.
  • Importance: High. Though the analyst world speaks of and uses linear discriminants less commonly, it’s a highly valuable technique to keep in mind as you progress in data analysis.
  • Nature of Data: the nature of data useful for the linear discriminant technique is data sets with many fields.
  • Motive: the motive for using linear discriminants is to classify observations that would otherwise be too complex for simple techniques like Naïve Bayes.

Exponential Smoothing Technique

  • Description: exponential smoothing is a technique falling under the forecasting method that uses a smoothing factor on prior data in order to predict future values. It can be linear or adjusted for seasonality. The basic principle behind exponential smoothing is to use a percent weight (a value between 0 and 1 called alpha) on more recent values in a series and a smaller percent weight on less recent values. The formula is f(x) = current period value * alpha + previous smoothed value * (1 - alpha). (See the sketch after this list.)
  • Importance: High. Most analysts still use the moving average technique (covered next) for forecasting, even though it is less efficient than exponential smoothing, because it’s easy to understand. However, good analysts will have exponential smoothing techniques in their pocket to increase the value of their forecasts.
  • Nature of Data: the nature of data useful for exponential smoothing is time series data . Time series data has time as part of its fields .
  • Motive: the motive for exponential smoothing is to forecast future values with a smoothing variable.
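
Here is the sketch referenced above: a bare-bones simple exponential smoothing function that applies the alpha-weighted formula to a made-up series.

```python
# Simple exponential smoothing sketch: each smoothed value is
# alpha * latest actual + (1 - alpha) * previous smoothed value. Data is made up.
def exponential_smoothing(series, alpha=0.5):
    smoothed = [series[0]]  # seed with the first observed value
    for value in series[1:]:
        smoothed.append(alpha * value + (1 - alpha) * smoothed[-1])
    return smoothed

monthly_sales = [100, 110, 105, 120, 130, 125]
print(exponential_smoothing(monthly_sales, alpha=0.3))
```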

Moving Average Technique

  • Description: the moving average technique falls under the forecasting method and uses an average of recent values to predict future ones. For example, to predict rainfall in April, you would take the average of rainfall from January to March. It’s simple, yet highly effective.
  • Importance: Very high. While I’m personally not a huge fan of moving averages due to their simplistic nature and lack of consideration for seasonality, they’re the most common forecasting technique and therefore very important.
  • Nature of Data: the nature of data useful for moving averages is time series data .
  • Motive: the motive for moving averages is to predict future values is a simple, easy-to-communicate way.

Neural Networks Technique

  • Description: neural networks are a highly complex artificial intelligence technique that replicate a human’s neural analysis through a series of hyper-rapid computations and comparisons that evolve in real time. This technique is so complex that an analyst must use computer programs to perform it.
  • Importance: Medium. While the potential for neural networks is theoretically unlimited, it’s still little understood and therefore uncommon. You do not need to know it by any means in order to be a data analyst.
  • Nature of Data: the nature of data useful for neural networks is data sets of astronomical size, meaning with 100s of 1000s of fields and the same number of rows at a minimum.
  • Motive: the motive for neural networks is to understand wildly complex phenomenon and data to thereafter act on it.

Decision Tree Technique

  • Description: the decision tree technique uses artificial intelligence algorithms to rapidly calculate possible decision pathways and their outcomes on a real-time basis. It’s so complex that computer programs are needed to perform it.
  • Importance: Medium. As with neural networks, decision trees with AI are too little understood and are therefore uncommon in corporate and research settings alike.
  • Nature of Data: the nature of data useful for the decision tree technique is hierarchical data sets that show multiple optional fields for each preceding field.
  • Motive: the motive for decision tree techniques is to compute the optimal choices to make in order to achieve a desired result.

Evolutionary Programming Technique

  • Description: the evolutionary programming technique uses a series of neural networks, sees how well each one fits a desired outcome, and selects only the best to test and retest. It’s called evolutionary because it resembles the process of natural selection by weeding out weaker options.
  • Importance: Medium. As with the other AI techniques, evolutionary programming just isn’t well-understood enough to be usable in many cases. Its complexity also makes it hard to explain in corporate settings and difficult to defend in research settings.
  • Nature of Data: the nature of data in evolutionary programming is data sets of neural networks, or data sets of data sets.
  • Motive: the motive for using evolutionary programming is similar to decision trees: understanding the best possible option from complex data.

Fuzzy Logic Technique

  • Description: fuzzy logic is a type of computing based on “approximate truths” rather than simple truths such as “true” and “false.” It is essentially two tiers of classification. For example, to say whether “Apples are good,” you need to first classify that “Good is x, y, z.” Only then can you say apples are good. Another way to see it is as helping a computer evaluate truth the way humans do: “definitely true, probably true, maybe true, probably false, definitely false.”
  • Importance: Medium. Like the other AI techniques, fuzzy logic is uncommon in both research and corporate settings, which means it’s less important in today’s world.
  • Nature of Data: the nature of fuzzy logic data is huge data tables that include other huge data tables with a hierarchy including multiple subfields for each preceding field.
  • Motive: the motive of fuzzy logic is to replicate human truth valuations in a computer in order to model human decisions based on past data. The obvious possible application is marketing.

Text Analysis Technique

  • Description: text analysis techniques fall under the qualitative data analysis type and use text to extract insights.
  • Importance: Medium. Text analysis techniques, like all the qualitative analysis type, are most valuable for researchers.
  • Nature of Data: the nature of data useful in text analysis is words.
  • Motive: the motive for text analysis is to trace themes in a text across sets of very long documents, such as books.

Coding Technique

  • Description: the coding technique is used in textual analysis to turn ideas into uniform phrases and analyze the number of times and the ways in which those ideas appear. For this reason, some consider it a quantitative technique as well. You can learn more about coding and the other qualitative techniques here .
  • Importance: Very high. If you’re a researcher working in social sciences, coding is THE analysis technique, and for good reason. It’s a great way to add rigor to analysis. That said, it’s less common in corporate settings.
  • Nature of Data: the nature of data useful for coding is long text documents.
  • Motive: the motive for coding is to make tracing ideas on paper more than an exercise of the mind by quantifying it and understanding it through descriptive methods.

Idea Pattern Technique

  • Description: the idea pattern analysis technique fits into coding as the second step of the process. Once themes and ideas are coded, simple descriptive analysis tests may be run. Some people even cluster the ideas!
  • Importance: Very high. If you’re a researcher, idea pattern analysis is as important as the coding itself.
  • Nature of Data: the nature of data useful for idea pattern analysis is already coded themes.
  • Motive: the motive for the idea pattern technique is to trace ideas in otherwise unmanageably-large documents.

Word Frequency Technique

  • Description: word frequency is a qualitative technique that stands in opposition to coding and uses an inductive approach to locate specific words in a document in order to understand its relevance. Word frequency is essentially the descriptive analysis of qualitative data because it uses stats like mean, median, and mode to gather insights.
  • Importance: High. As with the other qualitative approaches, word frequency is very important in social science research, but less so in corporate settings.
  • Nature of Data: the nature of data useful for word frequency is long, informative documents.
  • Motive: the motive for word frequency is to locate target words to determine the relevance of a document in question.

Types of data analysis in research

Types of data analysis in research methodology include every item discussed in this article. As a list, they are:

  • Quantitative
  • Qualitative
  • Mathematical
  • Machine Learning and AI
  • Descriptive
  • Prescriptive
  • Classification
  • Forecasting
  • Optimization
  • Grounded theory
  • Artificial Neural Networks
  • Decision Trees
  • Evolutionary Programming
  • Fuzzy Logic
  • Text analysis
  • Idea Pattern Analysis
  • Word Frequency Analysis
  • Naïve Bayes
  • Exponential smoothing
  • Moving average
  • Linear discriminant

Types of data analysis in qualitative research

As a list, the types of data analysis in qualitative research are the following methods:

  • Content analysis
  • Narrative analysis
  • Discourse analysis
  • Framework analysis
  • Grounded theory

Types of data analysis in quantitative research

As a list, the types of data analysis in quantitative research are:

  • Mathematical analysis (descriptive, diagnostic, predictive, and prescriptive)
  • Artificial intelligence (AI) and machine learning analysis

Data analysis methods

As a list, data analysis methods are:

  • Clustering (quantitative)
  • Classification (quantitative)
  • Forecasting (quantitative)
  • Optimization (quantitative)
  • Content (qualitative)
  • Narrative (qualitative)
  • Discourse (qualitative)
  • Framework (qualitative)
  • Grounded theory (qualitative)

Quantitative data analysis methods

As a list, quantitative data analysis methods are:

  • Clustering
  • Classification
  • Forecasting
  • Optimization

Tabular View of Data Analysis Types, Methods, and Techniques

  • Types (Numeric or Non-numeric): Quantitative, Qualitative
  • Types tier 2 (Traditional Numeric or New Numeric): Mathematical, Artificial Intelligence (AI)
  • Types tier 3 (Informative Nature): Descriptive, Diagnostic, Predictive, Prescriptive
  • Methods: Clustering, Classification, Forecasting, Optimization, Narrative analysis, Discourse analysis, Framework analysis, Grounded theory
  • Techniques: Clustering (doubles as technique), Regression (linear and multivariable), Naïve Bayes, Cohorts, Factors, Linear Discriminants, Exponential smoothing, Moving average, Neural networks, Decision trees, Evolutionary programming, Fuzzy logic, Text analysis, Coding, Idea pattern analysis, Word frequency



What is data analysis? Examples and how to get started


Even with years of professional experience working with data, the term "data analysis" still sets off a panic button in my soul. And yes, when it comes to serious data analysis for your business, you'll eventually want data scientists on your side. But if you're just getting started, no panic attacks are required.


Quick review: What is data analysis?

Data analysis is the process of examining, filtering, adapting, and modeling data to help solve problems. Data analysis helps determine what is and isn't working, so you can make the changes needed to achieve your business goals. 

Keep in mind that data analysis includes analyzing both quantitative data (e.g., profits and sales) and qualitative data (e.g., surveys and case studies) to paint the whole picture. Here are two simple examples (of a nuanced topic) to show you what I mean.

An example of quantitative data analysis is an online jewelry store owner using inventory data to forecast and improve reordering accuracy. The owner looks at their sales from the past six months and sees that, on average, they sold 210 gold pieces and 105 silver pieces per month, but they only had 100 gold pieces and 100 silver pieces in stock. By collecting and analyzing inventory data on these SKUs, they're forecasting to improve reordering accuracy. The next time they order inventory, they order twice as many gold pieces as silver to meet customer demand.

An example of qualitative data analysis is a fitness studio owner collecting customer feedback to improve class offerings. The studio owner sends out an open-ended survey asking customers what types of exercises they enjoy the most. The owner then performs qualitative content analysis to identify the most frequently suggested exercises and incorporates these into future workout classes.

Why is data analysis important?

Here's why it's worth implementing data analysis for your business:

Understand your target audience: You might think you know how to best target your audience, but are your assumptions backed by data? Data analysis can help answer questions like, "What demographics define my target audience?" or "What is my audience motivated by?"

Inform decisions: You don't need to toss and turn over a decision when the data points clearly to the answer. For instance, a restaurant could analyze which dishes on the menu are selling the most, helping them decide which ones to keep and which ones to change.

Adjust budgets: Similarly, data analysis can highlight areas in your business that are performing well and are worth investing more in, as well as areas that aren't generating enough revenue and should be cut. For example, a B2B software company might discover their product for enterprises is thriving while their small business solution lags behind. This discovery could prompt them to allocate more budget toward the enterprise product, resulting in better resource utilization.

Identify and solve problems: Let's say a cell phone manufacturer notices data showing a lot of customers returning a certain model. When they investigate, they find that model also happens to have the highest number of crashes. Once they identify and solve the technical issue, they can reduce the number of returns.

Types of data analysis (with examples)

There are five main types of data analysis—with increasingly scary-sounding names. Each one serves a different purpose, so take a look to see which makes the most sense for your situation. It's ok if you can't pronounce the one you choose. 

Types of data analysis including text analysis, statistical analysis, diagnostic analysis, predictive analysis, and prescriptive analysis.

Text analysis: What is happening?

Here are a few methods used to perform text analysis, to give you a sense of how it's different from a human reading through the text: 

Word frequency identifies the most frequently used words. For example, a restaurant monitors social media mentions and measures the frequency of positive and negative keywords like "delicious" or "expensive" to determine how customers feel about their experience. 

Language detection indicates the language of text. For example, a global software company may use language detection on support tickets to connect customers with the appropriate agent. 

Keyword extraction automatically identifies the most used terms. For example, instead of sifting through thousands of reviews, a popular brand uses a keyword extractor to summarize the words or phrases that are most relevant. 
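To give a sense of the mechanics behind the first of these methods, here's a minimal word-frequency sketch in Python. The reviews and the tiny stopword list are made up for illustration:

```python
from collections import Counter
import re

# Hypothetical customer reviews.
reviews = [
    "The food was delicious but a little expensive.",
    "Delicious pasta, friendly staff.",
    "Too expensive for the portion size.",
]

# Lowercase everything, pull out the words, and drop a few filler words.
words = re.findall(r"[a-z']+", " ".join(reviews).lower())
stopwords = {"the", "was", "but", "a", "for", "too", "and"}
counts = Counter(w for w in words if w not in stopwords)

print(counts.most_common(5))  # e.g. [('delicious', 2), ('expensive', 2), ...]
```

Dedicated text analysis tools layer tokenization, sentiment scoring, and language models on top of this idea, but the counting at the core is that simple.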

Statistical analysis: What happened?

Statistical analysis pulls past data to identify meaningful trends. Two primary categories of statistical analysis exist: descriptive and inferential.

Descriptive analysis

Here are a few methods used to perform descriptive analysis: 

Measures of frequency identify how frequently an event occurs. For example, a popular coffee chain sends out a survey asking customers what their favorite holiday drink is and uses measures of frequency to determine how often a particular drink is selected. 

Measures of central tendency use mean, median, and mode to identify results. For example, a dating app company might use measures of central tendency to determine the average age of its users.

Measures of dispersion measure how data is distributed across a range. For example, HR may use measures of dispersion to determine what salary to offer in a given field. 
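Here's a minimal sketch of those descriptive measures using Python's standard library; the ages are invented for illustration:

```python
import statistics

# Hypothetical ages of app users from a survey.
ages = [22, 25, 25, 28, 31, 34, 34, 34, 40, 47]

print("mean:", statistics.mean(ages))     # central tendency
print("median:", statistics.median(ages))
print("mode:", statistics.mode(ages))
print("std dev:", round(statistics.stdev(ages), 1))  # dispersion
print("range:", max(ages) - min(ages))
```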

Inferential analysis

Inferential analysis uses a sample of data to draw conclusions about a much larger population. This type of analysis is used when the population you're interested in analyzing is very large. 

Here are a few methods used when performing inferential analysis: 

Hypothesis testing identifies which variables impact a particular topic. For example, a business uses hypothesis testing to determine if increased sales were the result of a specific marketing campaign. 

Regression analysis shows the effect of independent variables on a dependent variable. For example, a rental car company may use regression analysis to determine the relationship between wait times and number of bad reviews. 
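As a sketch of what hypothesis testing looks like in practice, here's a two-sample t-test with SciPy. The before/during sales figures are invented to show the mechanics, not real campaign data:

```python
from scipy import stats

# Hypothetical daily sales before and during a marketing campaign.
before = [102, 98, 110, 95, 105, 101, 99, 104]
during = [115, 120, 109, 118, 122, 117, 111, 119]

t_stat, p_value = stats.ttest_ind(before, during)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The difference in sales is statistically significant.")
else:
    print("No significant difference detected.")
```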

Diagnostic analysis: Why did it happen?

Diagnostic analysis, also referred to as root cause analysis, uncovers the causes of certain events or results. 

Here are a few methods used to perform diagnostic analysis: 

Time-series analysis analyzes data collected over a period of time. A retail store may use time-series analysis to determine that sales increase between October and December every year. 

Correlation analysis determines the strength of the relationship between variables. For example, a local ice cream shop may determine that as the temperature in the area rises, so do ice cream sales. 
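For the ice cream example, a correlation coefficient can be computed in a couple of lines; the temperatures and sales below are made up for the sketch:

```python
from scipy.stats import pearsonr

# Hypothetical daily temperature (°C) and ice cream sales (units).
temperature = [18, 21, 24, 27, 30, 33, 35]
sales = [110, 135, 160, 190, 230, 260, 290]

r, p = pearsonr(temperature, sales)
print(f"Pearson r = {r:.2f} (p = {p:.4f})")  # r near 1 suggests a strong positive relationship
```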

Predictive analysis: What is likely to happen?

Predictive analysis aims to anticipate future developments and events. By analyzing past data, companies can predict future scenarios and make strategic decisions.  

Here are a few methods used to perform predictive analysis: 

Decision trees map out possible courses of action and outcomes. For example, a business may use a decision tree when deciding whether to downsize or expand. 
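Here's a toy decision tree built with scikit-learn; the features, labels, and the downsize/expand framing are invented purely to show the workflow:

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical features per past scenario: [monthly_revenue_k, growth_rate_pct]
X = [[80, -5], [120, 2], [60, -10], [150, 8], [95, 1], [200, 12]]
# Labels for what was done: 0 = downsize, 1 = expand (illustrative only)
y = [0, 1, 0, 1, 1, 1]

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, y)

print(model.predict([[110, 3]]))  # predicted action for a new scenario
```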

Prescriptive analysis: What action should we take?

The highest level of analysis, prescriptive analysis, aims to find the best action plan. Typically, AI tools model different outcomes to predict the best approach. While these tools serve to provide insight, they don't replace human consideration, so always use your human brain before going with the conclusion of your prescriptive analysis. Otherwise, your GPS might drive you into a lake.

Here are a few methods used to perform prescriptive analysis: 

Algorithms are used in technology to perform specific tasks. For example, banks use prescriptive algorithms to monitor customers' spending and recommend that they deactivate their credit card if fraud is suspected. 

Data analysis process: How to get started

The actual analysis is just one step in a much bigger process of using data to move your business forward. Here's a quick look at all the steps you need to take to make sure you're making informed decisions. 

Circle chart with data decision, data collection, data cleaning, data analysis, data interpretation, and data visualization.

Data decision

As with almost any project, the first step is to determine what problem you're trying to solve through data analysis. 

Make sure you get specific here. For example, a food delivery service may want to understand why customers are canceling their subscriptions. But to enable the most effective data analysis, they should pose a more targeted question, such as "How can we reduce customer churn without raising costs?" 

Data collection

Next, collect the required data from both internal and external sources. 

Internal data comes from within your business (think CRM software, internal reports, and archives), and helps you understand your business and processes.

External data originates from outside of the company (surveys, questionnaires, public data) and helps you understand your industry and your customers. 

Data cleaning

Data can be seriously misleading if it's not clean. So before you analyze, make sure you review the data you collected. Depending on the type of data you have, cleanup will look different (a short pandas sketch follows this list), but it might include: 

Removing unnecessary information 

Addressing structural errors like misspellings

Deleting duplicates

Trimming whitespace

Human checking for accuracy 
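Here's what that cleanup can look like in pandas. The file name, column names, and the spelling fix are assumptions made up for the sketch:

```python
import pandas as pd

df = pd.read_csv("customers.csv")  # hypothetical raw export

df = df.drop(columns=["internal_notes"], errors="ignore")  # remove unnecessary info
df["city"] = df["city"].replace({"New Yrok": "New York"})  # fix a known misspelling
df = df.drop_duplicates()                                  # delete duplicates
df["name"] = df["name"].str.strip()                        # trim whitespace

print(df.sample(5))  # spot-check a few rows by hand for accuracy
```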

Data analysis

Now that you've compiled and cleaned the data, use one or more of the above types of data analysis to find relationships, patterns, and trends. 

Data analysis tools can speed up the data analysis process and reduce the risk of human error. Here are some examples.

Spreadsheets sort, filter, analyze, and visualize data. 

Structured query language (SQL) tools manage and extract data in relational databases. 

Data interpretation

After you analyze the data, you'll need to go back to the original question you posed and draw conclusions from your findings. Here are some common pitfalls to avoid:

Correlation vs. causation: Just because two variables are associated doesn't mean they're necessarily related or dependent on one another. 

Confirmation bias: This occurs when you interpret data in a way that confirms your own preconceived notions. To avoid this, have multiple people interpret the data. 

Small sample size: If your sample size is too small or doesn't represent the demographics of your customers, you may get misleading results. If you run into this, consider widening your sample size to give you a more accurate representation. 

Data visualization

Finally, present your findings visually, as charts, graphs, or dashboards, so stakeholders can quickly grasp the insights.

Frequently asked questions

Need a quick summary or still have a few nagging data analysis questions? I'm here for you.

What are the five types of data analysis?

The five types of data analysis are text analysis, statistical analysis, diagnostic analysis, predictive analysis, and prescriptive analysis. Each type offers a unique lens for understanding data: text analysis provides insights into text-based content, statistical analysis focuses on numerical trends, diagnostic analysis looks into problem causes, predictive analysis deals with what may happen in the future, and prescriptive analysis gives actionable recommendations.

What is the data analysis process?

The data analysis process involves data decision, collection, cleaning, analysis, interpretation, and visualization. Every stage comes together to transform raw data into meaningful insights. Decision determines what data to collect, collection gathers the relevant information, cleaning ensures accuracy, analysis uncovers patterns, interpretation assigns meaning, and visualization presents the insights.

What is the main purpose of data analysis?

In business, the main purpose of data analysis is to uncover patterns, trends, and anomalies, and then use that information to make decisions, solve problems, and reach your business goals.




What is Data Analysis with Examples


In today's digital age, there is plenty of raw data being generated every second from various sources such as social media, websites, sales transactions, etc. This massive amount of data can be overwhelming without proper management. That's where data analysis comes into play.

In this article, we will uncover the definition of data analysis, its benefits, the complete data analysis process, some real-world examples, and top data analysis tools that will help you get familiar with the field.

What is Data Analysis and Its Benefits?


Data analysis is the process of examining, cleansing, transforming, and modeling data to discover useful information, inform conclusions, and support decision-making. It is a crucial aspect of any business because it enables data-driven decisions based on facts and statistics rather than gut instinct. 

Some of the key benefits of performing data analysis are:

  • It helps organizations make better decisions by providing them with accurate and up-to-date information. 
  • Businesses can identify areas where they can streamline processes and reduce manual work. 
  • It allows companies to understand their customers' needs and preferences, leading to more targeted marketing campaigns and improved customer satisfaction. 
  • Businesses can stay ahead of their competition by identifying trends and opportunities in the market. 
  • Organizations can identify cost-saving opportunities and eliminate unnecessary expenses. 
  • By analyzing customer feedback and market trends, companies can create products that better meet consumer demand. 
  • By analyzing historical data, businesses can better predict future trends and plan accordingly. 
  • With data analysis tools, organizations can monitor key performance indicators in real time and quickly respond to changes in the market. 
  • Data analysis allows businesses to compare their performance against industry benchmarks or competitors for continuous improvement. 

Data Analysis Process


There are five major steps in the data analysis process. Let us closely examine these steps one by one.

Step 1: Gathering Requirements


The first step in data analysis involves defining the problem statement that needs to be addressed.  

Before diving into the data analysis process, it is important to clearly understand what you are trying to achieve and what specific questions you need answers to. This step sets the foundation for further analysis.

To define your problem statement you should consider the following questions:  

1. What is the goal of your analysis?

2. What specific metrics or key performance indicators (KPIs) will help you measure success?

3. What data sources do you have access to?

Once you have a clear understanding of your problem statement, you can start gathering and preparing the necessary data for analysis.

Step 2: Data Collection


Collecting data is the foundation of any successful data analysis project. Without accurate and relevant data, the analysis will be flawed and conclusions drawn from it may be misleading. There are several key steps involved in collecting data for analysis:

1. Identify the sources of data:

Once you have defined your research question, the next step is to identify the sources of data that will help you answer that question. Data can come from a variety of sources such as surveys, interviews, observations, existing databases, or online sources. 

2. Determine the method of data collection:

Depending on the nature of your research question and available data sources you will need to determine the best method for collecting the data. This could involve conducting surveys, interviews, experiments, or using automated tools for web scraping or data extraction. 

3. Develop a data collection plan:

A well-thought-out data collection plan should include details such as who will collect the data, when and where it will be collected, how it will be recorded and stored, and any ethical considerations.

4. Collect the data:

Once your plan is in place, it's time to start collecting the data. Be sure to follow your plan closely and record all information accurately. It's also important to keep track of any potential biases or errors that may arise during the collection process. 

Step 3: Data Cleaning


Cleaning raw data involves identifying and correcting any inaccuracies, missing values, duplicates, or outliers in the dataset. This ensures that the analysis results are accurate and reliable. Here are some key steps involved in cleaning data: 

1. Removing duplicates:

Duplicates can skew analysis results by increasing certain values or giving undue importance to certain observations. It is important to identify and remove duplicate entries from the dataset to avoid such biases. 

2. Handling missing values:

Missing values can also affect the accuracy of analysis results. There are several ways to handle missing values, including imputation or removing rows with missing values altogether. 

3. Standardizing data formats:

Data may be stored in different formats across different sources within a dataset. Standardizing these formats makes it easier to compare and analyze the data effectively.  

4. Correcting errors:

Errors in data entry or recording can lead to inaccuracies in analysis results. It is important to identify and correct any errors in the dataset before proceeding with the analysis. 

5. Removing outliers:

Outliers are extreme values that lie far outside the normal range of values in a dataset. While outliers may sometimes be valid observations, they can also skew analysis results significantly. It is important to identify and remove outliers appropriately. 

6. Ensuring consistency:

Consistency in naming conventions, units of measurement, and other variables is crucial for accurate data analysis. Inconsistencies can lead to confusion and errors in interpretation. 

Step 4: Analyzing Data


Data analysis is a crucial step in any research or business project. It helps to make sense of the raw data collected and draw meaningful insights from it. The fourth step in data analysis involves analyzing the data to uncover patterns, trends, and relationships within the dataset.  

Several techniques can be used for analyzing data, depending on the type of data and the research question being addressed. Some common methods are mentioned below:

1. Descriptive Analysis

It is used to summarize and describe the main features of a dataset. This may include calculating measures such as mean, median, mode, standard deviation, and range. These statistics provide a basic understanding of the distribution of the data and help to identify any outliers or anomalies. 

2. Inferential Analysis

It is used to make predictions about a population based on sample data. This includes hypothesis testing and confidence intervals. Inferential statistics allow researchers to draw conclusions about relationships between variables and make informed decisions based on the data. 

3. Regression analysis

It is used to model the relationship between one or more independent variables and a dependent variable. This technique is particularly useful for predicting outcomes based on input variables. Regression analysis can also be used to identify important predictors within a dataset. 
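As an illustration, a simple linear regression can be fitted with SciPy. The spend and sales figures below are invented to show the mechanics, not real data:

```python
from scipy.stats import linregress

# Hypothetical advertising spend (independent) vs. monthly sales (dependent).
ad_spend = [10, 15, 20, 25, 30, 35]     # in $k
sales = [120, 150, 185, 210, 240, 275]  # in units

result = linregress(ad_spend, sales)
print(f"sales = {result.slope:.1f} * spend + {result.intercept:.1f} (R^2 = {result.rvalue**2:.2f})")
```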

4. Cluster analysis

It is used to group similar objects based on their characteristics. This technique is commonly used in market segmentation or customer profiling. Cluster analysis helps to identify patterns within the data that may not be immediately apparent. 
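A minimal k-means clustering sketch with scikit-learn; the customer features are invented for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical customers: [annual_spend_k, visits_per_month]
customers = np.array([
    [1.2, 1], [1.5, 2], [1.1, 1],   # low-spend, infrequent
    [6.0, 8], [5.5, 7], [6.4, 9],   # high-spend, frequent
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
print(kmeans.labels_)  # e.g. [0 0 0 1 1 1] -> two customer segments
```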

5. Factor analysis

It is used to reduce the dimensionality of a dataset by identifying underlying factors or latent variables that explain the observed correlations between variables. This technique can help to simplify complex datasets and identify key drivers of variation. 

Step 5: Interpreting Data


Interpreting data involves making sense of the results obtained from the analysis process. This step requires careful consideration of the findings and determining what they mean in relation to the research question.

There are several key aspects to consider when interpreting data: 

1. Identifying patterns and trends:

Once you have a good grasp of the context, it’s time to look for patterns and trends in the data . This could involve identifying correlations between variables, spotting outliers, or noticing recurring themes in qualitative data. 

2. Comparing results:

It can be helpful to compare your results with existing benchmarks to see how your findings stack up against industry standards. This can provide additional context and validation for your interpretations. 

3. Drawing conclusions:

Based on your analysis and observations, conclude what the data is telling you. Be sure to support your conclusions with evidence from the data. 

4. Communicating insights:

It’s important to effectively communicate your interpretations and insights to others. This could involve creating visualizations such as charts or graphs to illustrate key points, writing a report summarizing your findings, or presenting your results to stakeholders clearly and concisely. 

Types of Data Analysis

Types of Data Analysis

There are different types of data analysis techniques, including:

  • Descriptive analysis focuses on summarizing the main characteristics of a dataset.
  • Diagnostic analysis aims to identify the causes of certain outcomes or events.
  • Predictive analysis uses historical data to predict future trends or behaviors.
  • Prescriptive analysis provides recommendations on how to achieve the desired outcome.

To get detailed information about types of data analysis, check out this article .  

Real-world Examples of Data Analysis Process

1. Amazon Case Study

One of the key uses of data analysis is in marketing. Companies can leverage customer data to understand consumer behavior, preferences, and buying patterns. For example, e-commerce giant Amazon uses data analysis to recommend products to customers based on their browsing history and purchase behavior. This personalized approach not only enhances the shopping experience for customers but also increases sales for the company. 

2. IBM’s Watson Health Platform Case Study

Another real-life example of data analysis in action is in healthcare. Hospitals and healthcare providers utilize patient data to identify trends, predict disease outbreaks, and improve patient outcomes. For instance, IBM’s Watson Health platform analyzes medical records and clinical trials to provide doctors with treatment recommendations and assist in diagnosing illnesses more accurately. 

3. Finance Sector Case Study

In the finance sector, banks use data analysis to detect fraudulent activities and assess credit risk. By analyzing transactional data in real-time, financial institutions can identify suspicious patterns and prevent potential fraud before it occurs.

Tools for Data Analysis Process

Data analysis tools allow users to collect, clean, analyze, and visualize data to gain valuable insights that can drive business growth and success. 

There are many different types of data analysis tools available on the market today, each with its unique features and capabilities. Some of the most popular data analysis tools include: 

1. Sprinkle Data


Sprinkle Data is a self-service business intelligence tool with advanced analytics capabilities, built specifically for cloud data warehouses. With Sprinkle Data, users can consolidate data from various sources, transform it, and use it to create reports with a drag-and-drop interface.


2. Microsoft Excel:


Excel is a widely used spreadsheet program that offers powerful data analysis capabilities, including pivot tables, charts, and formulas. It is a versatile tool that can be used for basic data analysis as well as more complex tasks. 

3. Tableau:


Tableau is a data visualization tool that allows users to create interactive dashboards and reports using a drag-and-drop interface. It is known for its user-friendly design and ability to quickly generate insightful visualizations from large datasets. 

4. Python:

Python is a programming language that is commonly used for data analysis and machine learning tasks. With libraries such as Pandas and NumPy, Python provides powerful tools for cleaning, analyzing, and manipulating large datasets. 
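As a small, self-contained illustration of that (with invented sales records, not any particular dataset):

```python
import pandas as pd

# Hypothetical sales records.
df = pd.DataFrame({
    "region": ["North", "South", "North", "South", "East"],
    "units": [120, 95, 130, 80, 150],
})

print(df.describe())                        # summary statistics for numeric columns
print(df.groupby("region")["units"].sum())  # units sold per region
```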

5. R:

R is another programming language commonly used for statistical analysis and data visualization. It offers a wide range of packages for conducting advanced analyses such as regression modeling, time series forecasting, and clustering. 

6. Google Analytics:


Google Analytics is a web analytics tool that allows users to track website traffic and user behavior. It provides valuable insights into how users interact with websites and helps businesses optimize their online presence. 

Frequently Asked Questions (FAQs): What Is Data Analysis with Examples

What are the 5 processes of data analysis?  

The 5 processes of data analysis are data collection, data cleaning, data exploration, data analysis, and data interpretation.

What are the 7 steps of data analysis? 

The 7 steps of data analysis include defining the problem, collecting relevant data, cleaning and organizing the data, exploring the data, analyzing the data using statistical methods, interpreting the results to conclude, and communicating findings through reports or presentations. 

What are the 5 examples of data? 

Five examples of types of data are numerical (quantitative), categorical (qualitative), ordinal (ordered categories), time-series (collected over time), and spatial (geographic coordinates). 

What is data analysis with real-life examples? 

Data analysis with real-life examples could include tracking student performance over time to identify factors that impact academic success or analyzing patient health records to improve medical treatments based on outcomes. 

What is data with an example? 

Data is information that is collected or stored for reference or analysis. An example of this could be a spreadsheet containing sales figures for a company's products over a certain period. 

What is a data analytics use case? 

An example use case of data analytics could be predicting customer churn for a telecommunications company by analyzing historical customer behavior and identifying factors that contribute to customers leaving their service. 

What are some types of data analysis? 

Common types of data analysis include diagnostic analysis (identifying reasons behind events), exploratory analysis (finding relationships between variables), confirmatory analysis (confirming hypotheses with new datasets), and explanatory analysis (explaining why events occurred). 

What is the role of a data analyst? 

The role of a data analyst is to collect, clean, analyze, interpret, and visualize large amounts of complex information to help organizations make informed decisions based on evidence rather than intuition. 

What are the data analysis methods? 

To conduct effective data analysis, one must first define objectives clearly, gather relevant datasets from credible sources, clean and prepare the datasets appropriately by removing errors or duplicates, analyze them using appropriate statistical methods or tools, interpret the results accurately to draw meaningful insights, and finally communicate findings effectively through visualizations or reports.



Malaysian Family Physician, 3(1), 2008

Data Analysis in Qualitative Research: A Brief Guide to Using Nvivo

MSc, PhD, Faculty of Medicine, University of Malaya, Kuala Lumpur, Malaysia

Qualitative data is often subjective, rich, and consists of in-depth information normally presented in the form of words. Analysing qualitative data entails reading a large amount of transcripts looking for similarities or differences, and subsequently finding themes and developing categories. Traditionally, researchers ‘cut and paste’ and use coloured pens to categorise data. Recently, the use of software specifically designed for qualitative data management has greatly reduced the technical burden and eased this laborious task, making the process relatively easier. A number of computer software packages have been developed to mechanise this ‘coding’ process as well as to search and retrieve data. This paper illustrates the ways in which NVivo can be used in the qualitative data analysis process. The basic features and primary tools of NVivo which assist qualitative researchers in managing and analysing their data are described.

QUALITATIVE RESEARCH IN MEDICINE

Qualitative research has seen an increased popularity in the last two decades and is becoming widely accepted across a wide range of medical and health disciplines, including health services research, health technology assessment, nursing, and allied health. 1 There has also been a corresponding rise in the reporting of qualitative research studies in medical and health related journals. 2

The increasing popularity of qualitative methods is a result of the failure of quantitative methods to provide insight into in-depth information about the attitudes, beliefs, motives, or behaviours of people, for example in understanding the emotions, perceptions and actions of people who suffer from a medical condition. Qualitative methods explore the perspective and meaning of experiences, seek insight and identify the social structures or processes that explain people's behavioural meaning. 1,3 Most importantly, qualitative research relies on extensive interaction with the people being studied, and often allows researchers to uncover unexpected or unanticipated information, which is not possible with quantitative methods. In medical research, it is particularly useful, for example, in a health behaviour study whereby health or education policies can be effectively developed if reasons for behaviours are clearly understood when observed or investigated using qualitative methods. 4

ANALYSING QUALITATIVE DATA

Qualitative research yields mainly unstructured text-based data. These textual data could be interview transcripts, observation notes, diary entries, or medical and nursing records. In some cases, qualitative data can also include pictorial display, audio or video clips (e.g. audio and visual recordings of patients, radiology film, and surgery videos), or other multimedia materials. Data analysis is the part of qualitative research that most distinctively differentiates it from quantitative research methods. It is not a technical exercise as in quantitative methods, but more of a dynamic, intuitive and creative process of inductive reasoning, thinking and theorising. 5 In contrast to quantitative research, which uses statistical methods, qualitative research focuses on the exploration of values, meanings, beliefs, thoughts, experiences, and feelings characteristic of the phenomenon under investigation. 6

Data analysis in qualitative research is defined as the process of systematically searching and arranging the interview transcripts, observation notes, or other non-textual materials that the researcher accumulates to increase the understanding of the phenomenon. 7 The process of analysing qualitative data predominantly involves coding or categorising the data. Basically it involves making sense of huge amounts of data by reducing the volume of raw information, followed by identifying significant patterns, and finally drawing meaning from data and subsequently building a logical chain of evidence. 8

Coding or categorising the data is the most important stage in the qualitative data analysis process. Coding and data analysis are not synonymous, though coding is a crucial aspect of the qualitative data analysis process. Coding merely involves subdividing the huge amount of raw information or data, and subsequently assigning them into categories. 9 In simple terms, codes are tags or labels for allocating identified themes or topics from the data compiled in the study. Traditionally, coding was done manually, with the use of coloured pens to categorise data, and subsequently cutting and sorting the data. Given the advancement of software technology, electronic methods of coding data are increasingly used by qualitative researchers.
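Conceptually, a code is simply a label attached to a passage of text. The toy Python sketch below (not part of NVivo, and using invented quotes) illustrates the underlying idea of storing and retrieving coded segments:

```python
# Toy illustration: codes (themes) mapped to the passages assigned to them.
coded_data = {
    "barriers_to_screening": [
        "I was afraid the test would be painful.",
        "The clinic is too far from my house.",
    ],
    "perceived_risk": [
        "I don't think it can happen to someone my age.",
    ],
}

# Retrieving everything coded under one theme, much as a node browser does.
for quote in coded_data["barriers_to_screening"]:
    print("-", quote)
```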

Nevertheless, the computer does not do the analysis for the researchers. Users still have to create the categories, code, decide what to collate, identify the patterns and draw meaning from the data. The use of computer software in qualitative data analysis is limited due to the nature of qualitative research itself in terms of the complexity of its unstructured data, the richness of the data and the way in which findings and theories emerge from the data. 10 The programme merely takes over the marking, cutting, and sorting tasks that qualitative researchers used to do with a pair of scissors, paper and note cards. It helps to maximise efficiency and speed up the process of grouping data according to categories and retrieving coded themes. Ultimately, the researcher still has to synthesise the data and interpret the meanings that were extracted from the data. Therefore, the use of computers in qualitative analysis merely made organisation, reduction and storage of data more efficient and manageable. The qualitative data analysis process is illustrated in Figure 1 .

An external file that holds a picture, illustration, etc.
Object name is MFP-03-14-g001.jpg

Qualitative data analysis flowchart

USING NVIVO IN QUALITATIVE DATA ANALYSIS

NVivo is one of the computer-assisted qualitative data analysis software (CAQDAS) packages developed by QSR International (Melbourne, Australia), the world’s largest qualitative research software developer. This software allows for qualitative inquiry beyond coding, sorting and retrieval of data. It was also designed to integrate coding with qualitative linking, shaping and modelling. The following sections discuss the fundamentals of the NVivo software (version 2.0) and illustrate the primary tools in NVivo which assist qualitative researchers in managing their data.

Key features of NVivo

To work with NVivo, first and foremost, the researcher has to create a Project to hold the data or study information. Once a project is created, the Project pad appears ( Figure 2 ). The project pad of NVivo has two main menus: Document browser and Node browser . In any project in NVivo, the researcher can create and explore documents and nodes, when the data is browsed, linked and coded. Both document and node browsers have an Attribute feature, which helps researchers to refer to the characteristics of the data such as age, gender, marital status, ethnicity, etc.

An external file that holds a picture, illustration, etc.
Object name is MFP-03-14-g002.jpg

Project pad with documents tab selected

The document browser is the main work space for coding documents ( Figure 3 ). Documents in NVivo can be created inside the NVivo project or imported from MS Word or WordPad in a rich text (.rtf) format into the project. It can also be imported as a plain text file (.txt) from any word processor. Transcripts of interview data and observation notes are examples of documents that can be saved as individual documents in NVivo. In the document browser all the documents can be viewed in a database with short descriptions of each document.

An external file that holds a picture, illustration, etc.
Object name is MFP-03-14-g003.jpg

Document browser with coder and coding stripe activated

NVivo is also designed to allow the researcher to place a Hyperlink to other files (for example audio, video and image files, web pages, etc.) in the documents to capture conceptual links which are observed during the analysis. The readers can click on it and be taken to another part of the same document, or a separate file. A hyperlink is very much like a footnote.

The second menu is Node explorer ( Figure 4 ), which represents categories throughout the data. The codes are saved within the NVivo database as nodes. Nodes created in NVivo are equivalent to sticky notes that the researcher places on the document to indicate that a particular passage belongs to a certain theme or topic. Unlike sticky notes, the nodes in NVivo are retrievable, easily organised, and give flexibility to the researcher to either create, delete, alter or merge them at any stage. The two most common types of nodes are tree nodes (codes that are organised in a hierarchical structure) and free nodes (free standing and not associated with a structured framework of themes or concepts). Once the coding process is complete, the researcher can browse the nodes. To view all the quotes on a particular Node, select the particular node on the Node Explorer and click the Browse button ( Figure 5 ).

An external file that holds a picture, illustration, etc.
Object name is MFP-03-14-g004.jpg

Node explorer with a tree node highlighted

An external file that holds a picture, illustration, etc.
Object name is MFP-03-14-g005.jpg

Browsing a node

Coding in NVivo using Coder

Coding is done in the document browser. Coding involves the disaggregation of textual data into segments, examining the data similarities and differences, and grouping together conceptually similar data in the respective nodes. 11 The organised list of nodes will appear with a click on the Coder button at the bottom of the document browser window.

To code a segment of the text in a project document under a particular node, highlight the particular segment and drag the highlighted text to the desired node in the coder window ( Figure 3 ). The segments that have been coded to a particular node are highlighted in colours and nodes that have attached to a document turns bold. Multiple codes can be assigned to the same segment of text using the same process. Coding Stripes can be activated to view the quotes that are associated with the particular nodes. With the guide of highlighted text and coding stripes, the researcher can return to the data to do further coding or refine the coding.

Coding can be done with pre-constructed coding schemes where the nodes are first created using the Node explorer followed by coding using the coder. Alternatively, a bottom-up approach can be used where the researcher reads the documents and creates nodes when themes arise from the data as he or she codes.

Making and using memos

In analysing qualitative data, pieces of reflective thinking, ideas, theories, and concepts often emerge as the researcher reads through the data. NVivo allows the user the flexibility to record ideas about the research as they emerge in the Memos . Memos can be seen as add-on documents, treated as full status data and coded like any other documents. 12 Memos can be placed in a document or at a node. A memo itself can have memos (e.g. documents or nodes) linked to it, using DocLinks and NodeLinks .

Creating attributes

Attributes are characteristics (e.g. age, marital status, ethnicity, educational level, etc.) that the researcher associates with a document or node. Attributes have different values (for example, the values of the attribute for ethnicity are ‘Malay’, ‘Chinese’ and ‘Indian’). NVivo makes it possible to assign attributes to either document or node. Items in attributes can be added, removed or rearranged to help the researcher in making comparisons. Attributes are also integrated with the searching process; for example, linking the attributes to documents will enable the researcher to conduct searches pertaining to documents with specified characteristics ( Figure 6 ).

An external file that holds a picture, illustration, etc.
Object name is MFP-03-14-g006.jpg

Document attribute explorer

Search operation

The three most useful types of searches in NVivo are Single item (text, node, or attribute value), Boolean and Proximity searches. Single item search is particularly important, for example, if researchers want to ensure that every mention of the word ‘cure’ has been coded under the ‘Curability of cervical cancer’ tree node. Every paragraph in which this word is used can be viewed. The results of the search can also be compiled into a single document in the node browser and by viewing the coding stripe. The researcher can check whether each of the resulting passages has been coded under a particular node. This is particularly useful for the researcher to further determine whether conducting further coding is necessary.

Boolean searches combine codes using the logical terms like ‘and’, ‘or’ and ‘not’. Common Boolean searches are ‘or’ (also referred to as ‘combination’ or ‘union’) and ‘and’ (also called ‘intersection’). For example, the researcher may wish to search for a node and an attributed value, such as ‘ever screened for cervical cancer’ and ‘primary educated’. Search results can be displayed in matrix form and it is possible for the researcher to perform quantitative interpretations or simple counts to provide useful summaries of some aspects of the analysis. 13 Proximity searches are used to find places where two items (e.g. text patterns, attribute values, nodes) appear near each other in the text.
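The logic of these Boolean operations can be illustrated with ordinary Python sets; the document identifiers and node names below are invented:

```python
# Hypothetical sets of document IDs coded at a node or carrying an attribute value.
ever_screened = {"doc01", "doc03", "doc07", "doc09"}
primary_educated = {"doc03", "doc04", "doc09"}

print(ever_screened & primary_educated)  # 'and' (intersection): {'doc03', 'doc09'}
print(ever_screened | primary_educated)  # 'or' (union)
print(ever_screened - primary_educated)  # 'not' (difference)
```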

Using models to show relationships

Models or visualisations are an essential way to describe and explore relationships in qualitative research. NVivo provides a Modeler designated for visual exploration and explanation of relationships between various nodes and documents. In Model Explorer, the researcher can create, label and connect ideas or concepts. NVivo allows the user to create a model over time and have any number of layers to track the progress of theory development to enable the researcher to examine the stages in the model-building over time ( Figure 7 ). Any documents, nodes or attributes can be placed in a model and clicking on the item will enable the researcher to inspect its properties.

An external file that holds a picture, illustration, etc.
Object name is MFP-03-14-g007.jpg

Model explorer showing the perceived risk factors of cervical cancer

NVivo has clear advantages and can greatly enhance research quality as outlined above. It can ease the laborious task of data analysis which would otherwise be performed manually. The software certainly removes the tremendous amount of manual tasks and allows more time for the researcher to explore trends, identify themes, and make conclusions. Ultimately, analysis of qualitative data is now more systematic and much easier. In addition, NVivo is ideal for researchers working in a team as the software has a Merge tool that enables researchers that work in separate teams to bring their work together into one project.

The NVivo software has been revolutionised and enhanced recently. The newly released NVivo 7 (released March 2006) and NVivo 8 (released March 2008) are even more sophisticated, flexible, and enable more fluid analysis. These new versions come with a more user-friendly interface that resembles Microsoft Windows XP applications. Furthermore, they have new data handling capacities, such as enabling tables or images embedded in rich text files to be imported and coded as well. In addition, the user can also import and work on rich text files in character-based languages such as Chinese or Arabic.

To sum up, qualitative research undoubtedly has been advanced greatly by the development of CAQDAS. The use of qualitative methods in medical and health care research is postulated to grow exponentially in years to come with the further development of CAQDAS.

More information about the NVivo software

Detailed information about NVivo’s functionality is available at http://www.qsrinternational.com . The website also carries information about the latest versions of NVivo. Free demonstrations and tutorials are available for download.

ACKNOWLEDGEMENT

The examples in this paper were adapted from the data of the study funded by the Ministry of Science, Technology and Environment, Malaysia under the Intensification of Research in Priority Areas (IRPA) 06-02-1032 PR0024/09-06.

TERMINOLOGY

Attributes : An attribute is a property of a node, case or document. It is equivalent to a variable in quantitative analysis. An attribute (e.g. ethnicity) may have several values (e.g. Malay, Chinese, Indian, etc.). Any particular node, case or document may be assigned one value for each attribute. Similarities within or differences between groups can be identified using attributes. Attribute Explorer displays a table of all attributes assigned to a document, node or set.

CAQDAS : Computer Aided Qualitative Data Analysis. The CAQDAS programme assists data management and supports coding processes. The software does not really analyse data, but rather supports the qualitative analysis process. NVivo is one of the CAQDAS programmes; others include NUDIST, ATLAS-ti, AQUAD, ETHNOGRAPH and MAXQDA.

Code : A term that represents an idea, theme, theory, dimension, characteristic, etc., of the data.

Coder : A tool used to code a passage of text in a document under a particular node. The coder can be accessed from the Document or Node Browser .

Coding : The action of identifying a passage of text in a document that exemplifies ideas or concepts and connecting it to a node that represents that idea or concept. Multiple codes can be assigned to the same segment of text in a document.

Coding stripes : Coloured vertical lines displayed at the right-hand pane of a Document ; each is named with title of the node at which the text is coded.

DataLinks : A tool for linking the information in a document or node to the information outside the project, or between project documents. DocLinks , NodeLinks and DataBite Links are all forms of DataLink .

Document : A document in an NVivo project is an editable rich text or plain text file. It may be a transcription of project data or it may be a summary of such data or memos, notes or passages written by the researcher. The text in a document can be coded, may be given values of document attributes and may be linked (via DataLinks ) to other related documents, annotations, or external computer files. The Document Explorer shows the list of all project documents.

Memo : A document containing the researcher's commentary flagged (linked) on any text in a Document or Node. Any files (text, audio or video, or picture data) can be linked via MemoLink .

Model : NVivo models are made up of symbols, usually representing items in the project, which are joined by lines or arrows, designed to represent the relationship between key elements in a field of study. Models are constructed in the Modeller .

Node : Relevant passages in the project's documents are coded at nodes. A Node represents a code, theme, or idea about the data in a project. Nodes can be kept as Free Nodes (without organisation) or may be organised hierarchically in Trees (of categories and subcategories). Free nodes are free-standing and are not associated to themes or concepts. Early on in the project, tentative ideas may be stored in the Free Nodes area. Free nodes can be kept in a simple list and can be moved to a logical place in the Tree Node when higher levels of categories are discovered. Nodes can be given values of attributes according to the features of what they represent, and can be grouped in sets. Nodes can be organised (created, edited) in Node Explorer (a window listing all the project nodes and node sets). The Node Browser displays the node's coding and allows the researcher to change the coding.

Project : Collection of all the files, documents, codes, nodes, attributes, etc. associated with a research project. The Project pad is a window in NVivo when a project is open which gives access to all the main functions of the programme.

Sets : Sets in NVivo hold shortcuts to any nodes or documents, as a way of holding those items together without actually combining them. Sets are used primarily as a way of indicating items that in some way are related conceptually or theoretically. It provides different ways of sorting and managing data.

Tree Node : Nodes organised hierarchically into trees to catalogue categories and subcategories.

Qualitative vs Quantitative Research Methods & Data Analysis

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


What is the difference between quantitative and qualitative?

The main difference between quantitative and qualitative research is the type of data they collect and analyze.

Quantitative research collects numerical data and analyzes it using statistical methods. The aim is to produce objective, empirical data that can be measured and expressed in numerical terms. Quantitative research is often used to test hypotheses, identify patterns, and make predictions.

Qualitative research , on the other hand, collects non-numerical data such as words, images, and sounds. The focus is on exploring subjective experiences, opinions, and attitudes, often through observation and interviews.

Qualitative research aims to produce rich and detailed descriptions of the phenomenon being studied, and to uncover new insights and meanings.

Quantitative data is information about quantities, and therefore numbers, while qualitative data is descriptive and concerns phenomena that can be observed but not measured, such as language.

What Is Qualitative Research?

Qualitative research is the process of collecting, analyzing, and interpreting non-numerical data, such as language. Qualitative research can be used to understand how an individual subjectively perceives and gives meaning to their social reality.

Qualitative data is non-numerical data, such as text, video, photographs, or audio recordings. This type of data can be collected using diary accounts or in-depth interviews and analyzed using grounded theory or thematic analysis.

Qualitative research is multimethod in focus, involving an interpretive, naturalistic approach to its subject matter. This means that qualitative researchers study things in their natural settings, attempting to make sense of, or interpret, phenomena in terms of the meanings people bring to them. (Denzin and Lincoln, 1994, p. 2)

Interest in qualitative data came about as the result of the dissatisfaction of some psychologists (e.g., Carl Rogers) with the scientific approach of psychologists such as the behaviorists (e.g., Skinner ).

Since psychologists study people, the traditional approach to science is not seen as an appropriate way of carrying out research, because it fails to capture the totality of human experience and the essence of being human. Exploring participants’ experiences is known as a phenomenological approach (re: Humanism ).

Qualitative research is primarily concerned with meaning, subjectivity, and lived experience. The goal is to understand the quality and texture of people’s experiences, how they make sense of them, and the implications for their lives.

Qualitative research aims to understand the social reality of individuals, groups, and cultures as nearly as possible as participants feel or live it. Thus, people and groups are studied in their natural setting.

Some examples of qualitative research questions include what an experience feels like, how people talk about something, how they make sense of an experience, and how events unfold for people.

Research following a qualitative approach is exploratory and seeks to explain ‘how’ and ‘why’ a particular phenomenon, or behavior, operates as it does in a particular context. It can be used to generate hypotheses and theories from the data.

Qualitative Methods

There are different types of qualitative research methods, including diary accounts, in-depth interviews , documents, focus groups , case study research , and ethnography.

The results of qualitative methods provide a deep understanding of how people perceive their social realities and in consequence, how they act within the social world.

The researcher has several methods for collecting empirical materials, ranging from the interview to direct observation, to the analysis of artifacts, documents, and cultural records, to the use of visual materials or personal experience. (Denzin and Lincoln, 1994, p. 14)

Here are some examples of qualitative data:

Interview transcripts : Verbatim records of what participants said during an interview or focus group. They allow researchers to identify common themes and patterns, and draw conclusions based on the data. Interview transcripts can also be useful in providing direct quotes and examples to support research findings.

Observations : The researcher typically takes detailed notes on what they observe, including any contextual information, nonverbal cues, or other relevant details. The resulting observational data can be analyzed to gain insights into social phenomena, such as human behavior, social interactions, and cultural practices.

Unstructured interviews : Generate qualitative data through the use of open questions. This allows the respondent to talk in some depth, choosing their own words. This helps the researcher develop a real sense of a person’s understanding of a situation.

Diaries or journals : Written accounts of personal experiences or reflections.

Notice that qualitative data could be much more than just words or text. Photographs, videos, sound recordings, and so on, can be considered qualitative data. Visual data can be used to understand behaviors, environments, and social interactions.

Qualitative Data Analysis

Qualitative research is endlessly creative and interpretive. The researcher does not just leave the field with mountains of empirical data and then easily write up his or her findings.

Qualitative interpretations are constructed, and various techniques can be used to make sense of the data, such as content analysis, grounded theory (Glaser & Strauss, 1967), thematic analysis (Braun & Clarke, 2006), or discourse analysis.

For example, thematic analysis is a qualitative approach that involves identifying implicit or explicit ideas within the data. Themes will often emerge once the data has been coded .


Key Features

  • Events can be understood adequately only if they are seen in context. Therefore, a qualitative researcher immerses her/himself in the field, in natural surroundings. The contexts of inquiry are not contrived; they are natural. Nothing is predefined or taken for granted.
  • Qualitative researchers want those who are studied to speak for themselves, to provide their perspectives in words and other actions. Therefore, qualitative research is an interactive process in which the persons studied teach the researcher about their lives.
  • The qualitative researcher is an integral part of the data; without the active participation of the researcher, no data exists.
  • The study’s design evolves during the research and can be adjusted or changed as it progresses. For the qualitative researcher, there is no single reality. It is subjective and exists only in reference to the observer.
  • The theory is data-driven and emerges as part of the research process, evolving from the data as they are collected.

Limitations of Qualitative Research

  • Because of the time and costs involved, qualitative designs do not generally draw samples from large-scale data sets.
  • The problem of adequate validity or reliability is a major criticism. Because of the subjective nature of qualitative data and its origin in single contexts, it is difficult to apply conventional standards of reliability and validity. For example, because of the central role played by the researcher in the generation of data, it is not possible to replicate qualitative studies.
  • Also, contexts, situations, events, conditions, and interactions cannot be replicated to any extent, nor can generalizations be made to a wider context than the one studied with confidence.
  • The time required for data collection, analysis, and interpretation is lengthy. Analysis of qualitative data is difficult, and expert knowledge of the area is necessary to interpret it. Great care must be taken when doing so, for example, when looking for symptoms of mental illness.

Advantages of Qualitative Research

  • Because of close researcher involvement, the researcher gains an insider’s view of the field. This allows the researcher to find issues that are often missed (such as subtleties and complexities) by the scientific, more positivistic inquiries.
  • Qualitative descriptions can be important in suggesting possible relationships, causes, effects, and dynamic processes.
  • Qualitative analysis allows for ambiguities/contradictions in the data, which reflect social reality (Denscombe, 2010).
  • Qualitative research uses a descriptive, narrative style; this research might be of particular benefit to the practitioner as she or he could turn to qualitative reports to examine forms of knowledge that might otherwise be unavailable, thereby gaining new insight.

What Is Quantitative Research?

Quantitative research involves the process of objectively collecting and analyzing numerical data to describe, predict, or control variables of interest.

The goals of quantitative research are to test causal relationships between variables , make predictions, and generalize results to wider populations.

Quantitative researchers aim to establish general laws of behavior and phenomenon across different settings/contexts. Research is used to test a theory and ultimately support or reject it.

Quantitative Methods

Experiments typically yield quantitative data, as they are concerned with measuring things. However, other research methods, such as controlled observations and questionnaires, can produce both quantitative and qualitative information.

For example, a rating scale or closed questions on a questionnaire would generate quantitative data as these produce either numerical data or data that can be put into categories (e.g., “yes,” “no” answers).

Experimental methods limit the possible ways in which research participants can react to and express appropriate social behavior.

Findings are, therefore, likely to be context-bound and simply a reflection of the assumptions that the researcher brings to the investigation.

There are numerous examples of quantitative data in psychological research, including mental health. Here are a few examples:

One example is the Experience in Close Relationships Scale (ECR), a self-report questionnaire widely used to assess adult attachment styles.

The ECR provides quantitative data that can be used to assess attachment styles and predict relationship outcomes.

Neuroimaging data : Neuroimaging techniques, such as MRI and fMRI, provide quantitative data on brain structure and function.

This data can be analyzed to identify brain regions involved in specific mental processes or disorders.

Another example is the Beck Depression Inventory (BDI), a self-report questionnaire widely used to assess the severity of depressive symptoms in individuals.

The BDI consists of 21 questions, each scored on a scale of 0 to 3, with higher scores indicating more severe depressive symptoms. 

Quantitative Data Analysis

Statistics help us turn quantitative data into useful information to help with decision-making. We can use statistics to summarize our data, describing patterns, relationships, and connections. Statistics can be descriptive or inferential.

Descriptive statistics help us to summarize our data. In contrast, inferential statistics are used to identify statistically significant differences between groups of data (such as intervention and control groups in a randomized control study).
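
To make that concrete, here is a minimal sketch in Python (assuming NumPy and SciPy are installed, with made-up scores for an intervention and a control group): descriptive statistics summarize each group, and an inferential test asks whether the difference between them is statistically significant.

```python
import numpy as np
from scipy import stats

# Hypothetical outcome scores for an intervention group and a control group.
intervention = np.array([72, 78, 81, 69, 75, 80, 77, 74])
control = np.array([65, 70, 68, 72, 66, 71, 69, 67])

# Descriptive statistics: summarize each group.
print("Intervention mean:", intervention.mean(), "SD:", intervention.std(ddof=1))
print("Control mean:", control.mean(), "SD:", control.std(ddof=1))

# Inferential statistics: test whether the group difference is statistically significant.
t_stat, p_value = stats.ttest_ind(intervention, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```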

  • Quantitative researchers try to control extraneous variables by conducting their studies in the lab.
  • The research aims for objectivity (i.e., without bias) and is separated from the data.
  • The design of the study is determined before it begins.
  • For the quantitative researcher, the reality is objective, exists separately from the researcher, and can be seen by anyone.
  • Research is used to test a theory and ultimately support or reject it.

Limitations of Quantitative Research

  • Context: Quantitative experiments do not take place in natural settings. In addition, they do not allow participants to explain their choices or the meaning of the questions they may have for those participants (Carr, 1994).
  • Researcher expertise: Poor knowledge of the application of statistical analysis may negatively affect analysis and subsequent interpretation (Black, 1999).
  • Variability of data quantity: Large sample sizes are needed for more accurate analysis. Small-scale quantitative studies may be less reliable because of the low quantity of data (Denscombe, 2010). This also affects the ability to generalize study findings to wider populations.
  • Confirmation bias: The researcher might miss observing phenomena because of focus on theory or hypothesis testing rather than on the theory of hypothesis generation.

Advantages of Quantitative Research

  • Scientific objectivity: Quantitative data can be interpreted with statistical analysis, and since statistics are based on the principles of mathematics, the quantitative approach is viewed as scientifically objective and rational (Carr, 1994; Denscombe, 2010).
  • Useful for testing and validating already constructed theories.
  • Rapid analysis: Sophisticated software removes much of the need for prolonged data analysis, especially with large volumes of data involved (Antonius, 2003).
  • Replication: Quantitative data is based on measured values and can be checked by others because numerical data is less open to ambiguities of interpretation.
  • Hypotheses can also be tested because of statistical analysis (Antonius, 2003).

Antonius, R. (2003). Interpreting quantitative data with SPSS. Sage.

Black, T. R. (1999). Doing quantitative research in the social sciences: An integrated approach to research design, measurement and statistics. Sage.

Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3, 77–101.

Carr, L. T. (1994). The strengths and weaknesses of quantitative and qualitative research: What method for nursing? Journal of Advanced Nursing, 20(4), 716–721.

Denscombe, M. (2010). The good research guide: For small-scale social research. McGraw Hill.

Denzin, N., & Lincoln, Y. (1994). Handbook of qualitative research. Thousand Oaks, CA: Sage.

Glaser, B. G., Strauss, A. L., & Strutzel, E. (1968). The discovery of grounded theory: Strategies for qualitative research. Nursing Research, 17(4), 364.

Minichiello, V. (1990). In-depth interviewing: Researching people. Longman Cheshire.

Punch, K. (1998). Introduction to social research: Quantitative and qualitative approaches. London: Sage.

Further Information

  • Designing qualitative research
  • Methods of data collection and analysis
  • Introduction to quantitative and qualitative research
  • Checklists for improving rigour in qualitative research: a case of the tail wagging the dog?
  • Qualitative research in health care: Analysing qualitative data
  • Qualitative data analysis: the framework approach
  • Using the framework method for the analysis of qualitative data in multi-disciplinary health research
  • Content Analysis
  • Grounded Theory
  • Thematic Analysis



Qualitative vs. quantitative data in research: what's the difference?

If you're reading this, you likely already know the importance of data analysis. And you already know it can be incredibly complex.

At its simplest, research and its data can be broken down into two different categories: quantitative and qualitative. But what's the difference between each? When should you use them? And how can you use them together?

Understanding the differences between qualitative and quantitative data is key to any research project. Knowing both approaches can help you understand your data better—and ultimately understand your customers better. Quick takeaways:

Quantitative research uses objective, numerical data to answer questions like "what" and "how often." Conversely, qualitative research seeks to answer questions like "why" and "how," focusing on subjective experiences to understand motivations and reasons.

Quantitative data is collected through methods like surveys and experiments and analyzed statistically to identify patterns. Qualitative data is gathered through interviews or observations and analyzed by categorizing information to understand themes and insights.

Effective data analysis combines quantitative data for measurable insights with qualitative data for contextual depth.

What is quantitative data?

Qualitative and quantitative data differ in their approach and the type of data they collect.

Quantitative data refers to any information that can be quantified — that is, numbers. If it can be counted or measured, and given a numerical value, it's quantitative in nature. Think of it as a measuring stick.

Quantitative variables can tell you "how many," "how much," or "how often."

Some examples of quantitative data :  

How many people attended last week's webinar? 

How much revenue did our company make last year? 

How often does a customer rage click on this app?

To analyze these research questions and make sense of this quantitative data, you’d normally use a form of statistical analysis —collecting, evaluating, and presenting large amounts of data to discover patterns and trends. Quantitative data is conducive to this type of analysis because it’s numeric and easier to analyze mathematically.
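
As a small illustration (using pandas and fabricated event data, so the numbers and column names here are assumptions), "how many," "how much," and "how often" questions typically reduce to counts, sums, and averages:

```python
import pandas as pd

# Hypothetical event data: one row per customer session.
events = pd.DataFrame({
    "customer_id": [1, 1, 2, 3, 3, 3],
    "revenue": [0.0, 25.0, 40.0, 0.0, 15.0, 10.0],
    "rage_clicks": [0, 2, 1, 0, 4, 1],
})

print("How many sessions?", len(events))                                  # how many
print("How much revenue?", events["revenue"].sum())                       # how much
print("Average rage clicks per session:", events["rage_clicks"].mean())   # how often
print(events.groupby("customer_id")["rage_clicks"].sum())                 # per-customer totals
```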

Statistical analysis was once done largely by hand, but computers now do most of the work, and today's data volumes make statistics more valuable and useful than ever. Powerful computers and algorithms fuel many of the software tools used for this kind of analysis today.

Popular quantitative data collection methods are surveys, experiments, polls, and more.

Quantitative Data 101: What is quantitative data?

Take a deeper dive into what quantitative data is, how it works, how to analyze it, collect it, use it, and more.

Learn more about quantitative data →

What is qualitative data?

Unlike quantitative data, qualitative data is descriptive, expressed in terms of language rather than numerical values.

Qualitative data analysis describes information and cannot be measured or counted. It refers to the words or labels used to describe certain characteristics or traits.

You would turn to qualitative data to answer the "why?" or "how?" questions. It is often used to investigate open-ended studies, allowing participants (or customers) to show their true feelings and actions without guidance.

Some examples of qualitative data:

Why do people prefer using one product over another?

How do customers feel about their customer service experience?

What do people think about a new feature in the app?

Think of qualitative data as the type of data you'd get if you were to ask someone why they did something. Popular data collection methods are in-depth interviews, focus groups, or observation.


What are the differences between qualitative vs. quantitative data?

When it comes to conducting data research, you’ll need different collection, hypotheses and analysis methods, so it’s important to understand the key differences between quantitative and qualitative data:

Quantitative data is numbers-based, countable, or measurable. Qualitative data is interpretation-based, descriptive, and relating to language.

Quantitative data tells us how many, how much, or how often in calculations. Qualitative data can help us to understand why, how, or what happened behind certain behaviors .

Quantitative data is fixed and universal. Qualitative data is subjective and unique.

Quantitative research methods are measuring and counting. Qualitative research methods are interviewing and observing.

Quantitative data is analyzed using statistical analysis. Qualitative data is analyzed by grouping the data into categories and themes.

Qualitative vs. quantitative examples

Both provide immense value for any data collection effort and are key to truly finding answers and patterns.

More examples of quantitative and qualitative data

You've most likely already run into quantitative and qualitative data today. For the visual learner, here are some examples of both quantitative and qualitative data:

Quantitative data example

The customer has clicked on the button 13 times. 

The engineer has resolved 34 support tickets today. 

The team has completed 7 upgrades this month. 

14 cartons of eggs were purchased this month.

Qualitative data example

My manager has curly brown hair and blue eyes.

My coworker is funny, loud, and a good listener. 

The customer has a very friendly face and a contagious laugh.

The eggs were delicious.

The fundamental difference is that one type of data answers questions of quantity (how many, how much, how often), while the other answers questions descriptively (why and how).

What does this mean for data quality and analysis? If you just analyzed quantitative data, you’d be missing core reasons behind what makes a data collection meaningful. You need both in order to truly learn from data—and truly learn from your customers. 

What are the advantages and disadvantages of each?

Both types of data have their own pros and cons.

Advantages of quantitative data

It’s relatively quick and easy to collect and it’s easier to draw conclusions from. 

When you collect quantitative data, the type of data you gather will tell you which statistical tests are appropriate to use.

As a result, interpreting your data and presenting those findings is straightforward and less open to error and subjectivity.

Another advantage is that you can replicate it. Replicating a study is possible because your data collection is measurable and tangible for further applications.

Disadvantages of quantitative data

Quantitative data doesn't always tell you the full story, no matter which perspective you take.

If the data is incomplete or patchy, it can be inconclusive.

Quantitative research can be narrow in scope, which can lead to overlooking broader themes and relationships.

By focusing solely on numbers, there is a risk of missing the broader context that makes findings meaningful.

Advantages of qualitative data

Qualitative data offers rich, in-depth insights and allows you to explore context.

It’s great for exploratory purposes.

Qualitative research can add a predictive element to continuous, ongoing data collection.

Disadvantages of qualitative data

It's not a statistically representative form of data collection, because it relies on the experience and judgment of the researcher conducting it, and data can be lost or misinterpreted along the way.

It can also require multiple data-collection sessions, and small or unrepresentative samples can lead to misleading conclusions.

The takeaway is that it’s tough to conduct a successful data analysis without both. They both have their advantages and disadvantages and, in a way, they complement each other. 

Now, of course, in order to analyze both types of data, information has to be collected first.

Let's get into the research.

Quantitative and qualitative research

The core difference between qualitative and quantitative research lies in their focus and methods of data collection and analysis. This distinction guides researchers in choosing an appropriate approach based on their specific research needs.

Using mixed methods that combine the two can also help provide insights from combined qualitative and quantitative data.

Best practices of each help to look at the information under a broader lens to get a unique perspective. Using both methods is helpful because they collect rich and reliable data, which can be further tested and replicated.

What is quantitative research?

Quantitative research is based on the collection and interpretation of numeric data. It's all about the numbers and focuses on measuring (using inferential statistics ) and generalizing results. Quantitative research seeks to collect numerical data that can be transformed into usable statistics.

It relies on measurable data to formulate facts and uncover patterns in research. By employing statistical methods to analyze the data, it provides a broad overview that can be generalized to larger populations.

In terms of digital experience data, it puts everything in terms of numbers (or discrete data )—like the number of users clicking a button, bounce rates , time on site, and more. 

Some examples of quantitative research: 

What is the amount of money invested into this service?

What is the average number of times a button was dead clicked ?

How many customers are actually clicking this button?

Essentially, quantitative research is an easy way to see what’s going on at a 20,000-foot view. 

Each data set (or customer action, if we’re still talking digital experience) has a numerical value associated with it and is quantifiable information that can be used for calculating statistical analysis so that decisions can be made. 

You can use statistical operations to discover feedback patterns (with any representative sample size) in the data under examination. The results can be used to make predictions , find averages, test causes and effects, and generalize results to larger measurable data pools. 

Unlike qualitative methodology, quantitative research offers more objective findings as they are based on more reliable numeric data.

Quantitative data collection methods

Surveys

A survey is one of the most common quantitative research methods and involves questioning a large group of people. Questions are usually closed-ended and are the same for all participants. An unclear questionnaire can lead to distorted research outcomes.

Polls

Similar to surveys, polls yield quantitative data. That is, you poll a number of people and apply a numeric value to how many people responded with each answer.

Experiments

An experiment is another common method that usually involves a control group and an experimental group . The experiment is controlled and the conditions can be manipulated accordingly. You can examine any type of records involved if they pertain to the experiment, so the data is extensive. 

What is qualitative research?

Qualitative research does not simply help to collect data. It gives a chance to understand the trends and meanings of natural actions. It’s flexible and iterative.

Qualitative research focuses on the qualities of users—the actions that drive the numbers. It's descriptive research. The qualitative approach is subjective, too. 

It focuses on describing an action, rather than measuring it.

Some examples of qualitative research: 

The sunflowers had a fresh smell that filled the office.

All the bagels with bites taken out of them had cream cheese.

The man had blonde hair with a blue hat.

Qualitative research utilizes interviews, focus groups, and observations to gather in-depth insights.

This approach shines when the research objective calls for exploring ideas or uncovering deep insights rather than quantifying elements.

Qualitative data collection methods

Interviews

An interview is the most common qualitative research method. This method involves personal interaction (either in real life or virtually) with a participant. It's mostly used for exploring attitudes and opinions regarding certain issues.

Interviews are very popular methods for collecting data in product design.

Focus groups

Data analysis by focus group is another method where participants are guided by a host to collect data. Within a group (either in person or online), each member shares their opinion and experiences on a specific topic, allowing researchers to gather perspectives and deepen their understanding of the subject matter.


So which type of data is better for data analysis?

So how do you determine which type is better for data analysis ?

Quantitative data is structured and accountable. This type of data is formatted in a way so it can be organized, arranged, and searchable. Think about this data as numbers and values found in spreadsheets—after all, you would trust an Excel formula.

Qualitative data is considered unstructured. This type of data is formatted (and known for) being subjective, individualized, and personalized. Anything goes. Because of this, qualitative data is less conclusive if it's the only data in the study. However, it's still valuable.

Because quantitative data is more concrete, it’s generally preferred for data analysis. Numbers don’t lie. But for complete statistical analysis, using both qualitative and quantitative yields the best results. 

At Fullstory, we understand the importance of data, which is why we created a behavioral data platform that analyzes customer data for better insights. Our platform delivers a complete, retroactive view of how people interact with your site or app—and analyzes every point of user interaction so you can scale.

Unlock business-critical data with Fullstory

A perfect digital customer experience is often the difference between company growth and failure. And the first step toward building that experience is quantifying who your customers are, what they want, and how to provide them what they need.

Access to product analytics is the most efficient and reliable way to collect valuable quantitative data about funnel analysis, customer journey maps , user segments, and more.

But creating a perfect digital experience means you need organized and digestible quantitative data—but also access to qualitative data. Understanding the why is just as important as the what itself.

Fullstory's DXI platform combines the quantitative insights of product analytics with picture-perfect session replay for complete context that helps you answer questions, understand issues, and uncover customer opportunities.

Start a free 14-day trial to see how Fullstory can help you combine your most invaluable quantitative and qualitative insights and eliminate blind spots.



Survey statistical analysis methods

Get more from your survey results with tried and trusted statistical tests and analysis methods. The kind of data analysis you choose depends on your survey data, so it makes sense to understand as many statistical analysis options as possible. Here's a one-stop guide.

Why use survey statistical analysis methods?

Using statistical analysis for survey data is a best practice for businesses and market researchers. But why?

Statistical tests can help you improve your knowledge of the market, create better experiences for your customers, give employees more of what they need to do their jobs, and sell more of your products and services to the people that want them. As data becomes more available and easier to manage using digital tools, businesses are increasingly using it to make decisions, rather than relying on gut instinct or opinion.

When it comes to survey data , collection is only half the picture. What you do with your results can make the difference between uninspiring top-line findings and deep, revelatory insights. Using data processing tools and techniques like statistical tests can help you discover:

  • whether the trends you see in your data are meaningful or just happened by chance
  • what your results mean in the context of other information you have
  • whether one factor affecting your business is more important than others
  • what your next research question should be
  • how to generate insights that lead to meaningful changes

There are several types of statistical analysis for surveys . The one you choose will depend on what you want to know, what type of data you have, the method of data collection, how much time and resources you have available, and the level of sophistication of your data analysis software.


Before you start

Whichever statistical techniques or methods you decide to use, there are a few things to consider before you begin.

Nail your sampling approach

One of the most important aspects of survey research is getting your sampling technique right and choosing the right sample size . Sampling allows you to study a large population without having to survey every member of it. A sample, if it’s chosen correctly, represents the larger population, so you can study your sample data and then use the results to confidently predict what would be found in the population at large.

There will always be some discrepancy between the sample data and the population, a phenomenon known as sampling error , but with a well-designed study, this error is usually so small that the results are still valuable.

There are several sampling methods, including probability and non-probability sampling. Like statistical analysis, the method you choose will depend on what you want to know, the type of data you're collecting, and practical constraints around what is possible.

Define your null hypothesis and alternative hypothesis

A null hypothesis is a prediction you make at the start of your research process to help define what you want to find out. It’s called a null hypothesis because you predict that your expected outcome won’t happen – that it will be null and void. Put simply: you work to reject, nullify or disprove the null hypothesis.

Along with your null hypothesis, you’ll define the alternative hypothesis, which states that what you expect to happen will happen.

For example, your null hypothesis might be that you’ll find no relationship between two variables, and your alternative hypothesis might be that you’ll find a correlation between them. If you disprove the null hypothesis, either your alternative hypothesis is true or something else is happening. Either way, it points you towards your next research question.
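
As a minimal sketch (assuming SciPy and using fabricated paired measurements), testing a null hypothesis of "no relationship between two variables" against the alternative of a correlation might look like this:

```python
from scipy import stats

# Hypothetical paired measurements, e.g. satisfaction score and number of repeat purchases.
satisfaction = [3, 4, 2, 5, 4, 3, 5, 2, 4, 5]
purchases = [1, 3, 1, 4, 2, 2, 5, 0, 3, 4]

# H0: there is no correlation between the variables; H1: there is a correlation.
r, p_value = stats.pearsonr(satisfaction, purchases)

if p_value < 0.05:
    print(f"Reject the null hypothesis: r = {r:.2f}, p = {p_value:.4f}")
else:
    print(f"Fail to reject the null hypothesis: r = {r:.2f}, p = {p_value:.4f}")
```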

Use a benchmark

Benchmarking is a way of standardizing – leveling the playing field – so that you get a clearer picture of what your results are telling you. It involves taking outside factors into account so that you can adjust the parameters of your research and have a more precise understanding of what’s happening.

Benchmarking techniques use weighting to adjust for variables that may affect overall results. What does that mean? Well for example, imagine you’re interested in the growth of crops over a season. Your benchmarking will take into account variables that have an effect on crop growth, such as rainfall, hours of sunlight, any pests or diseases, type and frequency of fertilizer, so that you can adjust for anything unusual that might have happened, such as an unexpected plant disease outbreak on a single farm within your sample that would skew your results.

With benchmarks in place, you have a reference for what is “standard” in your area of interest, so that you can better identify and investigate variance from the norm.

The goal, as in so much of survey data analysis, is to make sure that your sample is representative of the whole population, and that any comparisons with other data are like-for-like.
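
A toy example of the weighting idea (with invented scores and weights, using NumPy): responses from an under-represented segment are up-weighted so the adjusted average better reflects the population you are benchmarking against.

```python
import numpy as np

# Hypothetical satisfaction scores from individual respondents.
scores = np.array([8.2, 7.9, 6.5, 6.8, 7.1])

# Weights chosen so that an under-represented segment counts for more;
# in a real study these would come from known population proportions.
weights = np.array([1.0, 1.0, 1.5, 1.5, 1.5])

print(f"Unweighted mean: {scores.mean():.2f}")
print(f"Weighted (adjusted) mean: {np.average(scores, weights=weights):.2f}")
```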

Inferential or descriptive?

Statistical methods can be divided into inferential statistics and descriptive statistics.

  • Descriptive statistics shed light on how the data is distributed across the population of interest, giving you details like variance within a group and mean values for measurements.
  • Inferential statistics help you to make judgments and predict what might happen in the future, or to extrapolate from the sample you are studying to the whole population. Inferential statistics are the types of analyses used to test a null hypothesis. We’ll mostly discuss inferential statistics in this guide.

Types of statistical analysis

Regression analysis

Regression is a statistical technique used for working out the relationship between two (or more) variables.

To understand regressions, we need a quick terminology check:

  • Independent variables are “standalone” phenomena (in the context of the study) that influence dependent variables
  • Dependent variables are things that change as a result of their relationship to independent variables

Let’s use an example: if we’re looking at crop growth during the month of August in Iowa, that’s our dependent variable. It’s affected by independent variables including sunshine, rainfall, pollution levels and the prevalence of certain bugs and pests.

A change in a dependent variable depends on, and is associated with, a change in one (or more) of the independent variables.

  • Linear regression uses a single independent variable to predict an outcome of the dependent variable.
  • Multiple regression uses at least two independent variables to predict the effect on the dependent variable. A multiple regression can be linear or non-linear.

The results from a linear regression analysis are shown as a graph with variables on the axes and a ‘regression curve’ that shows the relationships between them. Data is rarely directly proportional, so there’s usually some degree of curve rather than a straight line.

With this kind of statistical test, the null hypothesis is that there is no relationship between the dependent variable and the independent variable. The resulting graph would probably (though not always) look quite random rather than following a clear line.

Regression is a useful test statistic as you’re able to identify not only whether a relationship is statistically significant, but the precise impact of a change in your independent variable.
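
Here is a minimal sketch of a simple linear regression (assuming SciPy, with made-up rainfall and crop-growth figures standing in for the independent and dependent variables):

```python
from scipy import stats

# Hypothetical data: rainfall (mm) is the independent variable,
# crop growth (cm) is the dependent variable.
rainfall = [50, 60, 70, 80, 90, 100, 110]
growth = [12, 14, 15, 18, 19, 22, 23]

result = stats.linregress(rainfall, growth)

print(f"slope = {result.slope:.3f} cm per mm of rain")
print(f"intercept = {result.intercept:.2f} cm")
print(f"r-squared = {result.rvalue ** 2:.3f}")
print(f"p-value = {result.pvalue:.4f}")  # tests H0: the slope is zero (no relationship)
```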

Figure: linear regression graph

T-test

The T-test (aka Student's T-test) is a tool for comparing the mean values of two groups of data. The T-test allows the user to interpret whether differences are statistically significant or merely coincidental.

For example, do women and men have different mean heights? We can tell from running a t-test that there is a meaningful difference between the average height of a man and the average height of a woman – i.e. the difference is statistically significant.

For this test statistic, the null hypothesis would be that there’s no statistically significant difference.

The results of a T-test are expressed in terms of probability (p-value). If the p-value is below a certain threshold, usually 0.05, then you can be very confident that your two groups really are different and it wasn’t just a chance variation between your sample data.
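
A minimal sketch of that comparison (assuming SciPy, with fabricated height samples):

```python
from scipy import stats

# Hypothetical heights (cm) sampled from two groups.
women = [162, 158, 165, 170, 160, 167, 163, 159]
men = [175, 180, 172, 169, 178, 182, 174, 177]

# H0: there is no difference in mean height between the groups.
t_stat, p_value = stats.ttest_ind(women, men)

if p_value < 0.05:
    print(f"Statistically significant difference: t = {t_stat:.2f}, p = {p_value:.4f}")
else:
    print(f"No significant difference detected: t = {t_stat:.2f}, p = {p_value:.4f}")
```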

Analysis of variance (ANOVA) test

Like the T-test, ANOVA (analysis of variance) is a way of testing the differences between groups to see if they’re statistically significant. However, ANOVA allows you to compare three or more groups rather than just two.

Also like the T-test, you’ll start off with the null hypothesis that there is no meaningful difference between your groups.

ANOVA is used with a regression study to find out what effect independent variables have on the dependent variable. It can compare multiple groups simultaneously to see if there is a relationship between them.

An example of ANOVA in action would be studying whether different types of advertisements get different consumer responses. The null hypothesis is that none of them have more effect on the audience than the others and they’re all basically as effective as one another. The audience reaction is the dependent variable here, and the different ads are the independent variables.
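
A minimal sketch of that advertisement example (assuming SciPy, with made-up response scores for three ads):

```python
from scipy import stats

# Hypothetical consumer response scores (e.g. purchase intent, 1-10) for three ads.
ad_a = [6, 7, 5, 8, 6, 7]
ad_b = [4, 5, 6, 5, 4, 5]
ad_c = [8, 9, 7, 8, 9, 8]

# H0: all three ads produce the same mean response.
f_stat, p_value = stats.f_oneway(ad_a, ad_b, ad_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```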

Cluster analysis

Cluster analysis is a way of processing datasets by identifying how closely related the individual data points are. Using cluster analysis, you can identify whether there are defined groups (clusters) within a large pool of data, or if the data is continuously and evenly spread out.

Cluster analysis comes in a few different forms, depending on the type of data you have and what you’re looking to find out. It can be used in an exploratory way, such as discovering clusters in survey data around demographic trends or preferences, or to confirm and clarify an existing alternative or null hypothesis.

Cluster analysis is one of the more popular statistical techniques in market research , since it can be used to uncover market segments and customer groups.
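
As a sketch of the idea (assuming scikit-learn and NumPy, with invented respondent data), k-means is one common clustering algorithm used to find groups such as customer segments:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical survey respondents described by two numeric variables,
# e.g. monthly spend and visits per month.
respondents = np.array([
    [20, 2], [25, 3], [22, 2],     # low spend, low frequency
    [80, 10], [85, 12], [78, 11],  # high spend, high frequency
    [50, 6], [55, 5],              # somewhere in between
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(respondents)
print("Cluster assignments:", kmeans.labels_)
print("Cluster centers:\n", kmeans.cluster_centers_)
```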

Factor analysis

Factor analysis is a way to reduce the complexity of your research findings by trading a large number of initial variables for a smaller number of deeper, underlying ones. In performing factor analysis, you uncover “hidden” factors that explain variance (difference from the average) in your findings.

Because it delves deep into the causality behind your data, factor analysis is also a form of research in its own right, as it gives you access to drivers of results that can’t be directly measured.
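
A minimal sketch (assuming scikit-learn and NumPy, with simulated survey items rather than real findings) of recovering two underlying factors from six observed variables:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulate responses to six survey items driven by two hidden factors.
latent = rng.normal(size=(200, 2))
loadings = np.array([
    [0.9, 0.0], [0.8, 0.1], [0.7, 0.0],  # items driven mostly by factor 1
    [0.0, 0.9], [0.1, 0.8], [0.0, 0.7],  # items driven mostly by factor 2
])
observed = latent @ loadings.T + rng.normal(scale=0.3, size=(200, 6))

fa = FactorAnalysis(n_components=2, random_state=0).fit(observed)
print("Estimated loadings (items x factors):")
print(fa.components_.T.round(2))
```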

Conjoint analysis

Market researchers love to understand and predict why people make the complex choices they do. Conjoint analysis comes closest to doing this: it asks people to make trade-offs when making decisions, just as they do in the real world, then analyses the results to give the most popular outcome.

For example, an investor wants to open a new restaurant in a town. They think one of the following options might be the most profitable:

  • Option 1: $20 per head, 5 miles from home, partner is OK with it (it's cheap and fairly near home, but the partner is just OK with it)
  • Option 2: $40 per head, 2 miles from home, partner is OK with it (it's a bit more expensive but very near home, and the partner is just OK with it)
  • Option 3: $60 per head, 10 miles from home, partner loves it (it's expensive and quite far from home, but the partner loves it)

The investor commissions market research. The options are turned into a survey for the residents:

  • Which type of restaurant do you prefer? (Gourmet burger/Spanish tapas/Thai)
  • What would you be prepared to spend per head? ($20, $40, $60)
  • How far would you be willing to travel? (5 miles, 2 miles, 10 miles)
  • Would your partner…? (Love it, be OK with it)

There are lots of possible combinations of answers – 54 in this case: (3 restaurant types) x (3 price levels) x (3 distances) x (2 partner preferences). Once the survey data is in, conjoint analysis software processes it to figure out how important each option is in driving customer decisions, which levels for each option are preferred, and by how much.
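
A quick sketch of where those 54 combinations come from (the attribute levels mirror the survey above; estimating preference weights from the responses would be done with conjoint analysis software or a dedicated statistical model):

```python
from itertools import product

restaurant_types = ["Gourmet burger", "Spanish tapas", "Thai"]
prices = ["$20", "$40", "$60"]
distances = ["5 miles", "2 miles", "10 miles"]
partner_reaction = ["Loves it", "OK with it"]

profiles = list(product(restaurant_types, prices, distances, partner_reaction))
print(len(profiles))  # 54 possible combinations (3 x 3 x 3 x 2)
print(profiles[0])    # ('Gourmet burger', '$20', '5 miles', 'Loves it')
```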

So, from conjoint analysis , the restaurant investor may discover that there’s a statistically significant preference for an expensive Spanish tapas bar on the outskirts of town – something they may not have considered before.

Crosstab analysis

Crosstab (cross-tabulation) is used in quantitative market research to analyze categorical data – that is, variables that are different and mutually exclusive, such as: ‘men’ and ‘women’, or ‘under 30’ and ‘over 30’.

Also known by names like contingency table and data tabulation, crosstab analysis allows you to compare the relationship between two variables by presenting them in easy-to-understand tables.

A statistical method called chi-squared can be used to test whether the variables in a crosstab analysis are independent or not by looking at whether the differences between them are statistically significant.
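
A minimal sketch of a chi-squared test on a crosstab (assuming SciPy and NumPy, with a made-up contingency table):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical crosstab: rows are age groups ('under 30', 'over 30'),
# columns are answers ('yes', 'no') to a survey question.
crosstab = np.array([
    [45, 15],  # under 30
    [25, 35],  # over 30
])

# H0: the two variables are independent.
chi2, p_value, dof, expected = chi2_contingency(crosstab)
print(f"chi-squared = {chi2:.2f}, p = {p_value:.4f}, dof = {dof}")
```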

Text analysis and sentiment analysis

Analyzing human language is a relatively new form of data processing, and one that offers huge benefits in experience management. As part of the Stats iQ package, TextiQ from Qualtrics uses machine learning and natural language processing to parse and categorize data from text feedback, assigning positive, negative or neutral sentiment to customer messages and reviews.

With this data from text analysis in place, you can then employ statistical tools to analyze trends, make predictions and identify drivers of positive change.
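
This is not how TextiQ works internally; as a toy illustration only, a lexicon-based sketch of assigning positive, negative, or neutral sentiment to short pieces of feedback might look like this:

```python
# Tiny hand-made word lists purely for illustration; production sentiment analysis
# uses trained machine learning or natural language processing models.
POSITIVE = {"great", "love", "excellent", "helpful", "easy"}
NEGATIVE = {"slow", "broken", "confusing", "frustrating", "bad"}

def simple_sentiment(text: str) -> str:
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

reviews = [
    "The new dashboard is great and easy to use",
    "Checkout was slow and the form felt confusing",
    "Delivery arrived on Tuesday",
]
for review in reviews:
    print(simple_sentiment(review), "-", review)
```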

The easy way to run statistical analysis

As you can see, using statistical methods is a powerful and versatile way to get more value from your research data, whether you’re running a simple linear regression to show a relationship between two variables, or performing natural language processing to evaluate the thoughts and feelings of a huge population.

Knowing whether what you notice in your results is statistically significant or not gives you the green light to confidently make decisions and present findings based on your results, since statistical methods provide a degree of certainty that most people recognize as valid. So having results that are statistically significant is a hugely important detail for businesses as well as academics and researchers.

Fortunately, using statistical methods, even the highly sophisticated kind, doesn’t have to involve years of study. With the right tools at your disposal, you can jump into exploratory data analysis almost straight away.

Our Stats iQ™ product can perform the most complicated statistical tests at the touch of a button using our online survey software , or data brought in from other sources. Turn your data into insights and actions with CoreXM and Stats iQ . Powerful statistical analysis. No stats degree required.


CRediT author statement

CRediT (Contributor Roles Taxonomy) was introduced with the intention of recognizing individual author contributions, reducing authorship disputes and facilitating collaboration. The idea came about following a 2012 collaborative workshop led by Harvard University and the Wellcome Trust, with input from researchers, the International Committee of Medical Journal Editors (ICMJE) and publishers, including Elsevier, represented by Cell Press.

CRediT offers authors the opportunity to share an accurate and detailed description of their diverse contributions to the published work.

The corresponding author is responsible for ensuring that the descriptions are accurate and agreed by all authors

The role(s) of all authors should be listed, using the relevant categories shown below

Authors may have contributed in multiple roles

CRediT in no way changes the journal’s criteria to qualify for authorship

CRediT statements should be provided during the submission process and will appear above the acknowledgment section of the published paper as shown further below.

  • Conceptualization: Ideas; formulation or evolution of overarching research goals and aims
  • Methodology: Development or design of methodology; creation of models
  • Software: Programming, software development; designing computer programs; implementation of the computer code and supporting algorithms; testing of existing code components
  • Validation: Verification, whether as a part of the activity or separate, of the overall replication/reproducibility of results/experiments and other research outputs
  • Formal analysis: Application of statistical, mathematical, computational, or other formal techniques to analyze or synthesize study data
  • Investigation: Conducting a research and investigation process, specifically performing the experiments, or data/evidence collection
  • Resources: Provision of study materials, reagents, materials, patients, laboratory samples, animals, instrumentation, computing resources, or other analysis tools
  • Data Curation: Management activities to annotate (produce metadata), scrub data and maintain research data (including software code, where it is necessary for interpreting the data itself) for initial use and later reuse
  • Writing - Original Draft: Preparation, creation and/or presentation of the published work, specifically writing the initial draft (including substantive translation)
  • Writing - Review & Editing: Preparation, creation and/or presentation of the published work by those from the original research group, specifically critical review, commentary or revision, including pre- or post-publication stages
  • Visualization: Preparation, creation and/or presentation of the published work, specifically visualization/data presentation
  • Supervision: Oversight and leadership responsibility for the research activity planning and execution, including mentorship external to the core team
  • Project administration: Management and coordination responsibility for the research activity planning and execution
  • Funding acquisition: Acquisition of the financial support for the project leading to this publication

*Reproduced from Brand et al. (2015), Learned Publishing 28(2), with permission of the authors.

Sample CRediT author statement

Zhang San: Conceptualization, Methodology, Software. Priya Singh: Data curation, Writing - Original draft preparation. Wang Wu: Visualization, Investigation. Jan Jansen: Supervision. Ajay Kumar: Software, Validation. Sun Qi: Writing - Reviewing and Editing.

Read more about CRediT here, or check out this article from Authors' Update: CRediT where credit's due.

What is digital-twin technology?


What would you do if you had a copy of yourself? A digital doppelgänger, identical to you in every way, in an accurate digital rendering of your home, workplace, neighborhood, or city? Even better: What if the digital version of you—your digital twin—was impervious to injury, pain, or embarrassment? The mind boggles at the possibilities. Suffice it to say, you’d probably be able to make decisions for yourself with a lot more certainty of the outcome.


Kimberly Borden is a senior partner in McKinsey’s Chicago office. Anna Herlt is a senior partner in the Munich office. Tomás Lajous , Kayvaun Rowshankish , and Rodney W. Zemmel are senior partners in the New York office.

In business, this heightened degree of certainty is extremely valuable—and emerging digital twins may help deliver it.

Put simply, a digital twin is a virtual replica of a physical object, person, or process that can be used to simulate its behavior to better understand how it works in real life. Digital twins are linked to real data sources from the environment, which means that the twin updates in real time to reflect the original version. Digital twins also comprise a layer of behavioral insights and visualizations derived from data. When interconnected within one system, digital twins can form what’s known as an enterprise metaverse : a digital and often immersive environment that replicates and connects  every aspect of an organization to optimize simulations, scenario planning, and decision making .

There are a few different types of digital twins. First, there’s a product twin, which is a representation of a product . This digital twin can include products at various stages of the life cycle, from initial concept design and engineering through to full functionality—meaning you get live, real-time data on a product as if it’s in service. One great example of a product twin is something you probably already have in your pocket: Google Maps is a digital twin of the Earth’s surface. It links real-time data on traffic to help optimize your commute.

Other types of twins include production plant twins, which represent an entire manufacturing facility, or procurement and supply chain twins, also called network twins. And finally, infrastructure twins represent physical infrastructure such as a highway, a building, or even a stadium.

Digital twins have the potential to deliver more agile  and resilient operations. And their potential isn’t lost on CEOs: McKinsey research indicates that 70 percent  of C-suite technology executives at large enterprises are already exploring and investing in digital twins.

Read on to learn more about the value of digital twins and how to put them to use.

Learn more about McKinsey Digital  and McKinsey’s Operations Practice .

What kind of value can digital twins bring to an organization?

One of the areas where digital twins can bring the most value  is the reduction in time to market. Digital twins can allow for rapid iterations and optimizations of product designs—far faster than physically testing every single prototype. What’s more, digital twins can result in significant improvements in product quality. By simulating the product throughout the manufacturing process, it’s possible to identify flaws in the design much earlier. (“The companies that harness this first,” says McKinsey senior adviser Will Roper, referring to digital-manufacturing twins, “will really shake up  the markets they’re in.”) And finally, by mirroring a product in service, it’s possible to create a single source of truth for how the design is functioning, allowing for real-time adjustment or redesign.

McKinsey has seen organizations post revenue increases of as much as 10 percent by developing digital twins of their customers, allowing those customers to fully interact and immerse themselves within a company’s product. Daimler, for example, has developed customer twins that allow customers to “test drive” a vehicle without ever getting behind the wheel.

How can digital twins affect an organization’s environmental sustainability?

Product digital twins can be particularly helpful in improving sustainability efforts. These digital twins can help organizations reduce the material used in a product’s design, as well as improve the traceability of a product to reduce environmental waste. Consumer electronics manufacturers have made significant improvements to sustainability by using digital twins, reducing scrap waste by roughly 20 percent .

Learn more about McKinsey’s Operations Practice .

How can an organization get started on building its first digital twin?

A key element an organization needs for implementing digital twins is digital maturity . This essentially means data: a high-quality data infrastructure that delivers reliable data from both testing and live environments, as well as the talent needed to build and maintain that infrastructure.

But that doesn’t have to mean that organizations need a complex or dynamic environment to benefit from a digital twin. Some companies are seeing success twinning products as simple as toothbrushes to gain real-time customer feedback. Then, once the initial use case is established, organizations can add more layers of information and real-time feedback to further improve the twin.

Building and scaling a digital twin requires a three-step approach:

  • Create a blueprint . A blueprint should define the types of twins an organization will pursue, the order for building them to maximize value and reusability, the way their capabilities will evolve, and their ownership and governance structures.
  • Build the base digital twin . A project team then builds the base digital twin over the next three to six months. This phase begins with assembling the core data product, which enables the development of visualizations and allows data science professionals to build out one or two initial use cases.
  • Boost capabilities . Once a digital twin is running, an organization can expand its capabilities by adding more data layers and analytics to support new use cases. At this stage, organizations frequently advance their twins from simply representing assets, people, or processes to providing simulations and prescriptions through the use of AI and advanced modeling techniques.

What does the journey from digital twin to enterprise metaverse look like?

Companies can start their journey to an enterprise metaverse with one digital twin, modeled after one data product. A data product delivers a high-quality, ready-to-use data set that people across an organization can easily access and apply. It should be a single, reusable source of truth, enhanced over time, that can serve as the basis for future use cases. Eventually, a first digital twin can evolve based on learnings  from behavioral data, ultimately providing increasingly powerful predictive capabilities.

From there, an organization can create multiple interconnected digital twins  to simulate the complex relationships between different entities. This can generate richer behavior insights for even more sophisticated use cases—and greater value. For example, an organization might connect a digital twin of its customers with retail stores, inventory, sales, and customer process flows. In doing this, the organization could carry out the following:

  • simulate end-to-end impact of business and market changes on retail stores
  • create a true omnichannel  experience that provides for seamless pause-and-resume customer journeys across channels
  • optimize store layouts by responding to shifts in customer preferences
  • assess different compensation and staffing models by sales, employee performance, and the characteristics of local stores

Finally, an organization with a healthy digital-twin network can layer on additional technologies required to create an enterprise metaverse. A retailer could, for instance, connect the digital twin of its retail store to digital twins of its warehouses, supply chain, call center, and more until every part of the organization has been replicated.

How are some companies already using digital twins?

Interest in digital twins, combined with rapidly advancing supportive technologies, is spurring market estimates for digital-twin investments of more than $48 billion by 2026 . We’re already seeing some advanced implementations:

  • Emirates Team New Zealand. A digital twin of sailing environments, boats, and crew members enables  Emirates Team New Zealand to test boat designs without actually building them. This has allowed the champion sailing team to evaluate thousands—rather than just hundreds—of hydrofoil designs.
  • Anheuser-Busch InBev. A brewing and supply chain digital twin enables brewers to adjust inputs based on active conditions and can automatically compensate for production bottlenecks (for instance, when vats are full).
  • SoFi Stadium. To help optimize stadium management and operations, a digital twin aggregates multiple data sources including information about the stadium’s structure and real-time football data.
  • Space Force. This branch of the US Armed Forces is creating a digital twin of space, including replicas of extraterrestrial bodies and satellites.
  • SpaceX. A digital twin of SpaceX's Dragon capsule spacecraft enables operators to monitor and adjust trajectories, loads, and propulsion systems with the goal of maximizing safety and reliability during transport.


