
Research Methods Knowledge Base


Measurement

Measurement is the process of observing and recording the observations that are collected as part of a research effort. There are two major issues that will be considered here.

First, you have to understand the fundamental ideas involved in measuring. Here we consider two major measurement concepts. In Levels of Measurement, I explain the meaning of the four major levels of measurement: nominal, ordinal, interval, and ratio. Then we move on to the reliability of measurement, including consideration of true score theory and a variety of reliability estimators.

Second, you have to understand the different types of measures that you might use in social research. We consider four broad categories of measurements. Survey research includes the design and implementation of interviews and questionnaires. Scaling involves consideration of the major methods of developing and implementing a scale. Qualitative research provides an overview of the broad range of non-numerical measurement approaches. And unobtrusive measures presents a variety of measurement methods that don’t intrude on or interfere with the context of the research.



Social Sci LibreTexts

4.1: What is Measurement?


Learning Objective

  • Define measurement.

Measurement

Measurement is important. Recognizing that fact, and respecting it, will be of great benefit to you—both in research methods and in other areas of life as well. If, for example, you have ever baked a cake, you know well the importance of measurement. As someone who much prefers rebelling against precise rules over following them, I once learned the hard way that measurement matters. A couple of years ago I attempted to bake my husband a birthday cake without the help of any measuring utensils. I’d baked before, I reasoned, and I had a pretty good sense of the difference between a cup and a tablespoon. How hard could it be? As it turns out, it’s not easy guesstimating precise measures. That cake was the lumpiest, most lopsided cake I’ve ever seen. And it tasted kind of like Play-Doh. Figure 4.1 depicts the monstrosity I created, all because I did not respect the value of measurement.

Figure 4.1: Measurement is important in baking and in research.

Just as measurement is critical to successful baking, it is essential to successfully pulling off a social scientific research project. In sociology, when we use the term measurement we mean the process by which we describe and ascribe meaning to the key facts, concepts, or other phenomena that we are investigating. At its core, measurement is about defining one’s terms in as clear and precise a way as possible. Of course, measurement in social science isn’t quite as simple as using some predetermined or universally agreed-on tool, such as a measuring cup or spoon, but there are some basic tenets on which most social scientists agree when it comes to measurement. We’ll explore those, as well as some of the ways that measurement might vary depending on your unique approach to the study of your topic.

What Do Social Scientists Measure?

The question of what social scientists measure can be answered by asking oneself what social scientists study. Think about the topics you’ve learned about in other sociology classes you’ve taken or the topics you’ve considered investigating yourself. Or think about the many examples of research you’ve read about in this text, such as Milkie and Warner’s study of first graders’ mental health (Milkie, M. A., & Warner, C. H. (2011). Classroom learning environments and the mental health of first grade children. Journal of Health and Social Behavior, 52, 4–22). In order to conduct that study, Milkie and Warner needed to have some idea about how they were going to measure mental health. What does mental health mean, exactly? And how do we know when we’re observing someone whose mental health is good and when we see someone whose mental health is compromised? Understanding how measurement works in research methods helps us answer these sorts of questions.

As you might have guessed, social scientists will measure just about anything that they have an interest in investigating. For example, those who are interested in learning something about the correlation between social class and levels of happiness must develop some way to measure both social class and happiness. Those who wish to understand how well immigrants cope in their new locations must measure immigrant status and coping. Those who wish to understand how a person’s gender shapes their workplace experiences must measure gender and workplace experiences. You get the idea. Social scientists can and do measure just about anything you can imagine observing or wanting to study.

How Do Social Scientists Measure?

Measurement in social science is a process. It occurs at multiple stages of a research project: in the planning stages, in the data collection stage, and sometimes even in the analysis stage. Recall that previously we defined measurement as the process by which we describe and ascribe meaning to the key facts, concepts, or other phenomena that we are investigating. Once we’ve identified a research question, we begin to think about what some of the key ideas are that we hope to learn from our project. In describing those key ideas, we begin the measurement process.

Let’s say that our research question is the following: How do new college students cope with the adjustment to college? In order to answer this question, we’ll need to have some idea about what coping means. We may come up with an idea about what coping means early in the research process, as we begin to think about what to look for (or observe) in our data-collection phase. Once we’ve collected data on coping, we also have to decide how to report on the topic. Perhaps, for example, there are different types or dimensions of coping, some of which lead to more successful adjustment than others. However we decide to proceed, and whatever we decide to report, the point is that measurement is important at each of these phases.

As the preceding paragraph demonstrates, measurement is a process in part because it occurs at multiple stages of conducting research. We could also think of measurement as a process because of the fact that measurement in itself involves multiple stages. From identifying one’s key terms to defining them to figuring out how to observe them and how to know if our observations are any good, there are multiple steps involved in the measurement process. An additional step in the measurement process involves deciding what elements one’s measures contain. A measure’s elements might be very straightforward and clear, particularly if they are directly observable. Other measures are more complex and might require the researcher to account for different themes or types. These sorts of complexities require paying careful attention to a concept’s level of measurement and its dimensions.

KEY TAKEAWAYS

  • Measurement is the process by which we describe and ascribe meaning to the key facts, concepts, or other phenomena that we are investigating.
  • Measurement occurs at all stages of research.


Researcher's guide to 4 measurement scales: Nominal, ordinal, interval, ratio

Understanding the type of data you're working with is crucial for your data gathering and analysis. Data can be qualitative or quantitative, coming from a variety of sources. Data that is descriptive and extensive is referred to as qualitative data; numerical data is quantitative.

There will be several variables in your dataset that can be captured with different degrees of accuracy. There are four primary types of measurement scale: nominal, ordinal, interval, and ratio. This article explains what a scale of measurement is, describes the four scales with examples, and covers the best question types for measurement scales in research.

  • What is a scale of measurement?

The scales of measurement show the accuracy with which variables are captured. A scale of measurement is a system or framework for classifying and quantifying variable characteristics: the features of the variables being evaluated are expressed as numbers or categories.

The scale of measurement,  also known as the level of measurement , describes the accuracy level that may be achieved while recording data. These four measuring scales were created by  Stanley Smith Stevens  in  1946 .

The four main measuring scales are nominal, ordinal, interval, and ratio. These levels are listed in increasing order of the detailed information they provide. The complexity and accuracy of the degree of measurement are ranked from low ( nominal ) to high ( ratio ) in a hierarchy.
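To make the hierarchy concrete, the following Python sketch (the property table and function are illustrative, not from the article) maps each level to the central-tendency statistics conventionally permitted at that level:

```python
# Properties of the four levels, in increasing order of information provided.
SCALE_PROPERTIES = {
    "nominal":  dict(order=False, equal_intervals=False, true_zero=False),
    "ordinal":  dict(order=True,  equal_intervals=False, true_zero=False),
    "interval": dict(order=True,  equal_intervals=True,  true_zero=False),
    "ratio":    dict(order=True,  equal_intervals=True,  true_zero=True),
}

def permissible_statistics(scale):
    """Return central-tendency statistics conventionally allowed at a level."""
    props = SCALE_PROPERTIES[scale]
    stats = ["mode"]                      # counting categories is always allowed
    if props["order"]:
        stats.append("median")            # ranking allows a middle value
    if props["equal_intervals"]:
        stats.append("mean")              # equal gaps make averaging meaningful
    if props["true_zero"]:
        stats.append("geometric mean")    # ratios between values are meaningful
    return stats

print(permissible_statistics("ordinal"))  # ['mode', 'median']
```

Each level inherits everything permitted at the levels below it, which is exactly the low-to-high hierarchy described above.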

The four main measuring scales

  • 1. Nominal scale

The nominal scale is the first level of measurement . Numbers are used solely for identifying an object . This assessment addresses only non-numeric factors or situations where numbers have no meaning. The nominal scale is the simplest of the four variable measuring scales.

Your data can be categorized by grouping them into mutually exclusive labels; however, there is no hierarchy among the categories. The numbers on this scale are only labels for grouping or dividing the variables. Because these values have no quantitative meaning, performing calculations on them is meaningless.

Nominal scale examples

Notice that the categories are mutually exclusive, and none has any numerical significance. Some examples are given below to better explain the nominal scale.

#1 - What is your favorite car brand?

  •  Toyota
  • Mercedes-Benz

In this question, only the brand names matter to the researcher; there is no inherent order among the brands. When collecting nominal data, researchers analyze the responses using descriptive tags.

When a survey respondent chooses Toyota as their favorite brand, the selected option may be coded as “1” behind the scenes, but the code is only a label. Such coding makes it easy to count how many respondents selected each brand and to see which was chosen most often. The nominal scale is the most fundamental research scale and is the foundation for quantitative research.
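As the paragraph above suggests, the only meaningful analysis of nominal data is counting labels and finding the mode. A minimal Python sketch (the responses are hypothetical):

```python
from collections import Counter

# Hypothetical nominal responses: only counting and the mode are meaningful.
responses = ["Toyota", "Mercedes-Benz", "Toyota", "Toyota", "Mercedes-Benz"]

counts = Counter(responses)
most_common_brand, n = counts.most_common(1)[0]

print(counts)             # Counter({'Toyota': 3, 'Mercedes-Benz': 2})
print(most_common_brand)  # 'Toyota' — the mode, the only valid "average" here
```

Averaging the numeric codes behind these labels would be meaningless, which is why the nominal level permits nothing beyond frequencies and the mode.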

  • 2. Ordinal scale

The ordinal scale, the second measurement level, reports the ranking and ordering of the data without determining the degree of variance among them. Ordinal data is quantitative information with a naturally existing order, but the amount by which values differ is uncertain.

An ordinal scale is a variable measurement scale in statistics used to show the order of variables rather than the differences between them. Generally, these scales represent non-mathematical concepts like pleasure , happiness , and frequency .

Ordinal scale examples

The ordinal scale often measures satisfaction , preferences , frequency , and agreement . Below, we have shared a sample question that a health research company used to measure the monthly exercise frequency of individuals. You can better understand the ordinal scale by examining the example below:

#2 - How often do you exercise?

Researchers can use the ordinal scale to acquire more data than they would with a nominal scale. The order of the answer options in this case, as well as their labeling, is crucial. The researcher finds it convenient to analyze the findings according to the results’ order and name.
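The idea can be sketched in a few lines of Python (the answer options and responses below are hypothetical): ordinal labels are mapped to ranks so that the median respects the order without assuming equal gaps between the options.

```python
import statistics

# Hypothetical ordered answer options for "How often do you exercise?"
ORDER = ["never", "rarely", "sometimes", "often", "daily"]
rank = {label: i for i, label in enumerate(ORDER)}  # preserves order only

answers = ["rarely", "often", "sometimes", "never", "sometimes"]
ranks = [rank[a] for a in answers]

# The median respects rank order without assuming equal gaps between labels.
median_label = ORDER[int(statistics.median(ranks))]
print(median_label)  # 'sometimes'

# A mean of these ranks would implicitly assume equal spacing between the
# labels — exactly what the ordinal scale does not guarantee.
```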

  • 3. Interval scale

The interval scale is the third of the four measurement levels. It is a quantitative measuring scale with order, meaningful and equal differences between values, and an arbitrary zero point.

The interval scale measures variables along a standard scale at equal intervals. The measures used to calculate the distance between the variables are highly reliable. These scales are effective as they open doors for the statistical analysis of provided data. 

Interval scale examples

Interval measurements can be used to assign metrics to data values. No real zero point exists , but you may categorize , rank , and infer equal gaps between adjacent data points. The following example may make the interval scale easier to understand. 

The Celsius and Fahrenheit temperature scales are well-known examples of interval scales: “0” is an arbitrary point, and negative values can exist. The difference between 50 and 30 degrees is equal to the difference between 30 and 10 degrees, and 50 is always higher than 30; but because the zero point is arbitrary, it is not meaningful to say that 50 degrees is “twice as hot” as 25 degrees.
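The difference-versus-ratio point can be checked numerically. In this sketch, a 20-degree Celsius gap converts consistently to a 36-degree Fahrenheit gap, while the ratio of the two readings changes with the arbitrary zero:

```python
def c_to_f(c):
    """Convert Celsius to Fahrenheit (shifts the arbitrary zero point)."""
    return c * 9 / 5 + 32

a_c, b_c = 50.0, 30.0
a_f, b_f = c_to_f(a_c), c_to_f(b_c)

# Differences are meaningful and convert consistently between the two scales:
print(a_c - b_c)  # 20.0
print(a_f - b_f)  # 36.0 (the same gap, scaled by 9/5)

# Ratios are NOT meaningful: the "ratio" depends on where zero sits.
print(round(a_c / b_c, 2))  # 1.67 in Celsius
print(round(a_f / b_f, 2))  # 1.42 in Fahrenheit
```

The same two temperatures yield different "ratios" on the two scales, which is exactly why ratios are undefined at the interval level.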

  • 4. Ratio scale

The ratio scale is the fourth level of measurement in research and has a zero point or character of origin . A ratio scale of measurement is quantitative , with absolute zero and equal gaps between nearby points. The ratio scale has no negative values since there is a zero value.

Researchers can use the ratio scale to determine the mean, median, and mode as measures of central tendency. Because the scale has an absolute zero, values cannot be negative. When researchers want to use a ratio scale, they must consider the qualities of the variable and whether it has all of the required features.

Ratio scale examples

Physical properties of people and objects may be quantified using ratio scales; consequently, height, weight, and kilocalories are instances of ratio measurement. Below are examples of ratio scales; you can better understand the ratio scale by looking at them:

# 3 - What is your current weight in kilograms?  

  • Less than 49 kilograms
  • 50–69 kilograms
  • 70–89 kilograms
  • 90–109 kilograms
  • More than 110 kilograms

Weight in kilograms is an excellent example of ratio data: if something weighs 0 kilograms, it weighs nothing. Compare this with temperature, where a reading of zero degrees does not mean “no temperature” but simply an extremely cold one.
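Because ratio data have a true zero, the full range of descriptive statistics applies, including meaningful ratios. A brief sketch with hypothetical weights:

```python
import statistics

# Hypothetical weights in kilograms: ratio data with a true zero.
weights_kg = [49.0, 60.0, 75.0, 90.0, 110.0]

print(statistics.mean(weights_kg))    # 76.8 — the mean is meaningful
print(statistics.median(weights_kg))  # 75.0

# Ratios are meaningful too: 90 kg really is twice as heavy as 45 kg.
print(90.0 / 45.0)  # 2.0
```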

  • Best question types for measurement scales

The question type determines the data type created by a survey, and the data type limits the data analysis that may be conducted. The most frequent survey question is an interval rating scale question, which we utilize to record the respondent’s degree of sentiment about the issue of interest.   

Ordinal questions commonly feature a list of response alternatives, distinguished by the fact that the response options are ordered in some way. Ratio scale questions arise when respondents are asked to report a physical measure. Here are the best question types for measurement scales:

1 - Open-ended questions

Asking open-ended questions is a great way to learn more about a topic. When you ask an open-ended question, you allow the respondent to elaborate and provide a detailed explanation. A nominal scale can be used for open-ended question types. You can ask how, what, why, when, explain, and describe questions like the example below.

  • What is your favorite color?

Open-ended question example

2 - Rating questions

Rating questions enable participants to weigh or assign numerical values to responses using a graphical interface, employing a basic 1-5 star rating system or 0-100 slider scale where a higher number equals a better score. Ordinal scales and interval scales can be used for rating question types.

  • How likely are you to recommend this product to a friend? 

Rating question example

3 - Likert scale questions

Likert scale questions are critical in determining a respondent's opinion or attitude toward a particular issue and are essential to market research. A Likert scale is a five-, seven-, or nine-point agreement scale used to assess respondents’ agreement with various claims. Interval scales and ordinal scales can be used for Likert scale question types.

  • What was your level of satisfaction with our product?

- Extremely satisfied

- Very satisfied

- A little satisfied

- A little dissatisfied

- Not satisfied at all

Likert scale question example

4 - Multiple choice questions

Multiple-choice questions are classic survey questions because they provide respondents with multiple choices. They can contain single-select or multi-select options. Ratio scales and nominal scales can be used for multiple-choice questions:

  • How much time do you spend on social media per day?

- Less than 1 hour

- 2-3 hours

- 3-4 hours

- 4-6 hours

- More than 6 hours

Ratio scale multiple-choice question example

What is your favorite phone brand? 

Nominal scale multiple-choice question example

  • Frequently asked questions about levels of measurement

Is Likert scale ordinal or nominal?

The Likert scale is a type of ordinal scale. It assesses respondents’ attitudes or levels of agreement and disagreement on a subject by asking them to select from a list of answer alternatives such as “strongly agree,” “agree,” “neutral,” “disagree,” and “strongly disagree.” The selections are arranged in a positive-to-negative sequence, reflecting a rating of views or levels of agreement.

Nominal scale vs. Interval scale

A nominal scale is a measuring scale that divides variables into different categories or groups with no regard for the order or magnitude of the categories. On the other hand, an interval scale is a measuring scale that categorizes variables and measures the magnitude or distance between them meaningfully and consistently.

The significant distinction between a nominal scale and an interval scale is that a nominal scale does not quantify the distance or size between the categories , but an interval scale does.

Nominal scale vs. ratio scale

The nominal scale is the most basic measure for categorizing data into groups or categories. In contrast, the ratio scale is a more sophisticated measuring scale that categorizes data, specifies the order, and creates meaningful gaps between values.

The ratio scale offers a more accurate variable measurement with meaningful intervals and a zero point , allowing for mathematical operations , as opposed to the nominal scale, which categorizes data into categories without any order or ranking.

Are Likert scales ordinal or interval?

Whether Likert data are treated as ordinal or interval depends on the particular requirements of the analysis being done. Because Likert scale items have a distinct rank order but unknown, possibly unequal spacing between response options, Likert scales are typically regarded as ordinal data.

Sometimes, however, overall Likert scale scores are treated as interval data, on the assumption that the scores are equally spaced and have directionality.
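The two readings can be contrasted in a short sketch (the item coding and responses below are hypothetical): the median only uses rank order, while the mean relies on the equal-spacing assumption just described.

```python
import statistics

# Hypothetical coding of a 5-point Likert item. Treating the codes as interval
# data is an assumption: the true gaps between labels are unknown.
CODES = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
         "agree": 4, "strongly agree": 5}

answers = ["agree", "neutral", "strongly agree", "agree", "disagree"]
scores = [CODES[a] for a in answers]

print(statistics.median(scores))  # 4 — safe under the ordinal reading
print(statistics.mean(scores))    # 3.6 — meaningful only if you accept the
                                  # equal-spacing (interval) assumption
```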

Ordinal vs. interval scale

Ordinal and interval scales are two of the four primary categories of data. Both data types serve the need to categorize and communicate information, and both can be used to measure quantities.

Ordinal scales have ordered categories without equal intervals between them, whereas interval scales have ordered categories with equal, measurable intervals. This is the primary distinction between ordinal and interval scales.

Nominal scale vs. ordinal scale

A nominal scale is a scale in which variables are only “ named ” or “ labeled ” without regard to their order. Beyond merely identifying them, the variables on an ordinal scale have a precise order.

A nominal scale is a measuring scale that divides data into unique, unrelated groups without any innate hierarchy or order . On the contrary, an ordinal scale is a measuring scale that ranks or orders data according to some trait or feature .

Interval scale vs. ratio scale

The interval scale and the ratio scale are two quantitative scales that are used to quantify variables in research or statistics. While they have certain similarities, they also have significant distinctions. 

The interval scale lacks a genuine zero point , whereas the ratio scale has one. When compared to variables measured on an interval scale, this distinction enables more complex mathematical operations on variables recorded on a ratio scale.

How is the interval scale used?

The interval scale provides a controlled and regulated method for collecting, analyzing, and interpreting data, which aids in the generation of insights and the making of informed decisions in a variety of sectors. 

Interval scales work well in surveys where respondents must provide temperature , time , and date variables . Interval scales may be readily integrated into multiple-choice or rating scale questions by asking respondents to rate using a numerical scale. Here are some common uses of interval scale:

  • Surveys and questionnaires: The interval scale is extensively used in the design and administration of surveys and questionnaires. You can use an interval scale when creating a survey. Respondents can score or rank objects on a scale, which allows researchers to obtain quantitative data for the study.
  • Market research: In market research, an interval scale is used to assess client preferences, satisfaction levels, and buy intent. It aids in the analysis and interpretation of client feedback or ratings, helping organizations to make educated decisions. 
  • Measurement and quantification: By giving numerical values to distinct properties or variables, the interval scale provides for exact measurement and quantification of data.

Sena is a content writer at forms.app. She likes to read and write articles on different topics. Sena also likes to learn about different cultures and travel. She likes to study and learn different languages. Her specialty is linguistics, surveys, survey questions, and sampling methods.



10.1 What is measurement?

Learning objectives.

Learners will be able to…

  • Define measurement
  • Explain where measurement fits into the process of designing research
  • Apply Kaplan’s three categories to determine the complexity of measuring a given variable

Pre-awareness check (Knowledge)

What do you already know about measuring key variables in your research topic?

In social science, when we use the term  measurement , we mean the process by which we describe and ascribe meaning to the key facts, concepts, or other phenomena that we are investigating. In this chapter, we’ll use the term “concept” to mean an abstraction that has meaning. Concepts can be understood from our own experiences or from particular facts, but they don’t have to be limited to real-life phenomena. We can have a concept of anything we can imagine or experience such as weightlessness, friendship, or income. Understanding exactly what our concepts mean is necessary in order to measure them.

In research, measurement is a systematic procedure for assigning scores, meanings, and descriptions to concepts so that those scores represent the characteristic of interest. Social scientists can and do measure just about anything you can imagine observing or wanting to study. Of course, some things are easier to observe or measure than others.

Where does measurement fit in the process of designing research?

Table 10.1 is intended as a partial review and outlines the general process researchers can follow to get from problem formulation to data collection, including measurement. Keep in mind that this process is iterative. For example, you may find something in your literature review that leads you to refine your conceptualizations, or you may discover as you attempt to conceptually define your terms that you need to return to the literature for further information. Accordingly, this table should be seen as a suggested path to take rather than an inflexible rule about how research must be conducted.

Table 10.1. Components of the Research Process from Problem Formulation to Data Collection. Note. Information on attachment theory in this table came from: Bowlby, J. (1978). Attachment theory and its therapeutic implications. Adolescent Psychiatry, 6 , 5-33

Categories of concepts that social scientists measure

In 1964, philosopher Abraham Kaplan (1964) [1] wrote The Conduct of Inquiry , which has been cited over 8,500 times. [2] In his text, Kaplan describes different categories of things that behavioral scientists observe. One of those categories, which Kaplan called “observational terms,” is probably the simplest to measure in social science. Observational terms are simple concepts. They are the sorts of things that we can see with the naked eye simply by looking at them. Kaplan roughly defines them as concepts that are easy to identify and verify through direct observation. If, for example, we wanted to know how the conditions of playgrounds differ across different neighborhoods, we could directly observe the variety, amount, and condition of equipment at various playgrounds.

Indirect observables , on the other hand, are less straightforward concepts to assess. In Kaplan’s framework, they are conditions that are subtle and complex that we must use existing knowledge and intuition to define. If we conducted a study for which we wished to know a person’s income, we’d probably have to ask them their income, perhaps in an interview or a survey. Thus, we have observed income, even if it has only been observed indirectly. Birthplace might be another indirect observable. We can ask study participants where they were born, but chances are good we won’t have directly observed any of those people being born in the locations they report.

Sometimes the concepts that we are interested in are more complex and more abstract than observational terms or indirect observables. Because they are complex, constructs generally consist of more than one concept. Let’s take for example, the construct “bureaucracy.” We know this term has something to do with hierarchy, organizations, and how they operate, but measuring such a construct is trickier than measuring something like a person’s income because of the complexity involved. Here’s another construct: racism. What is racism? How would you measure it? Racism and bureaucracy are constructs whose meanings we have come to agree on.

Though we may not be able to observe constructs directly, we can observe their components. In Kaplan’s categorization, constructs are concepts that are “not observational either directly or indirectly” (Kaplan, 1964, p. 55), [3] but they can be defined based on observables. An example is the construct of depression: a diagnosis can be made using the DSM-5, whose diagnostic criteria include fatigue, poor concentration, and so on. Each of these components of depression can be observed indirectly, so we are able to measure the construct by defining it in terms of what we can observe.
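The idea of measuring a construct through its observable components can be sketched as a simple scoring function. The indicator names and data below are hypothetical illustrations, not actual diagnostic criteria:

```python
# A sketch of measuring a construct via its observable components.
# Indicator names are hypothetical, not DSM-5 criteria.
def construct_score(indicators):
    """Score a construct as the count of observed components that are present."""
    return sum(1 for present in indicators.values() if present)

observed = {
    "fatigue": True,              # indirectly observable (e.g. self-report)
    "poor_concentration": True,   # indirectly observable (e.g. a task score)
    "low_mood": False,
}
print(construct_score(observed))  # 2 — two of three components observed
```

Real measurement instruments are far more elaborate (weighted items, validated cutoffs), but the structure is the same: the unobservable construct is defined in terms of components we can observe.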

TRACK 1 (IF YOU ARE CREATING A RESEARCH PROPOSAL FOR THIS CLASS):

Look at the variables in your research question.

  • Classify them as direct observables, indirect observables, or constructs.
  • Do you think measuring them will be easy or hard?
  • What are your first thoughts about how to measure each variable? No wrong answers here, just write down a thought about each variable.

TRACK 2 (IF YOU AREN’T CREATING A RESEARCH PROPOSAL FOR THIS CLASS): 

You are interested in studying older adults’ social-emotional well-being. Specifically, you would like to research the impact on levels of older adult loneliness of an intervention that pairs older adults living in assisted living communities with university student volunteers for a weekly conversation.

Develop a working research question for this topic. Then, look at the variables in your research question.

  • Kaplan, A. (1964). The conduct of inquiry: Methodology for behavioral science. San Francisco, CA: Chandler Publishing Company. ↵
  • Earl Babbie offers a more detailed discussion of Kaplan’s work in his text. You can read it in: Babbie, E. (2010). The practice of social research (12th ed.). Belmont, CA: Wadsworth. ↵
  • Kaplan, A. (1964). The conduct of inquiry: Methodology for behavioral science. San Francisco, CA: Chandler Publishing Company. ↵

The process by which we describe and ascribe meaning to the key facts, concepts, or other phenomena under investigation in a research study.

In measurement, conditions that are easy to identify and verify through direct observation.

Things that require subtle and complex observations to measure; we may need to draw on existing knowledge and intuition to define them.

Conditions that are not directly observable and represent states of being, experiences, and ideas.

Doctoral Research Methods in Social Work Copyright © by Mavs Open Press. All Rights Reserved.


Rank the following mobile brands in order of your preference: the most preferred brand should be ranked one, the second most preferred ranked two, and so on.


Interval Scale

The interval scale is the next higher level of measurement and overcomes the main limitation of the ordinal scale: on an ordinal scale the magnitude of the difference between values is unknown, whereas on an interval scale the difference between two values has a meaningful interpretation. The distance between any two adjacent attributes is called an  interval , and intervals are always equal.

Examples of collecting interval scale data using a questionnaire:

How likely are you to recommend our product to your friends or relatives?


The Likert scale, developed by Rensis Likert, is a tool for collecting interval data.


Ratio Scale

The ratio scale is purely quantitative and, among the four levels of measurement, the most precise. Unlike on the other three scales, a score of zero on a ratio scale is not arbitrary: it represents the complete absence of the quantity being measured.

This is the unique quality of ratio scale data. The ratio scale has all the characteristics of the nominal, ordinal, and interval scales. Examples of ratio scale variables are age, weight, height, income, and distance.

Examples of collecting ratio scale data using a questionnaire:

Specify your monthly income:

How many students are there in your institution? :

Number of departments in your organisation :


Measurements in quantitative research: how to select and report on research instruments

Affiliation.

  • 1 Department of Acute and Tertiary Care in the School of Nursing, University of Pittsburgh in Pennsylvania.
  • PMID: 24969252
  • DOI: 10.1188/14.ONF.431-433

Measures exist to numerically represent degrees of attributes. Quantitative research is based on measurement and is conducted in a systematic, controlled manner. These measures enable researchers to perform statistical tests, analyze differences between groups, and determine the effectiveness of treatments. If something is not measurable, it cannot be tested.

Keywords: measurements; quantitative research; reliability; validity.

  • Clinical Nursing Research / methods*
  • Clinical Nursing Research / standards
  • Fatigue / nursing*
  • Neoplasms / nursing*
  • Oncology Nursing*
  • Quality of Life*
  • Reproducibility of Results


Psychological Measurement

19 Understanding Psychological Measurement

Learning objectives.

  • Define measurement and give several examples of measurement in psychology.
  • Explain what a psychological construct is and give several examples.
  • Distinguish conceptual from operational definitions, give examples of each, and create simple operational definitions.
  • Distinguish the four levels of measurement, give examples of each, and explain why this distinction is important.

What Is Measurement?

Measurement  is the assignment of scores to individuals so that the scores represent some characteristic of the individuals. This very general definition is consistent with the kinds of measurement that everyone is familiar with—for example, weighing oneself by stepping onto a bathroom scale, or checking the internal temperature of a roasting turkey using a meat thermometer. It is also consistent with measurement in the other sciences. In physics, for example, one might measure the potential energy of an object in Earth’s gravitational field by finding its mass and height (which of course requires measuring  those  variables) and then multiplying them together along with the gravitational acceleration of Earth (9.8 m/s²). The result of this procedure is a score that represents the object’s potential energy.

This general definition of measurement is consistent with measurement in psychology too. (Psychological measurement is often referred to as psychometrics .) Imagine, for example, that a cognitive psychologist wants to measure a person’s working memory capacity—their ability to hold in mind and think about several pieces of information all at the same time. To do this, she might use a backward digit span task, in which she reads a list of two digits to the person and asks them to repeat them in reverse order. She then repeats this several times, increasing the length of the list by one digit each time, until the person makes an error. The length of the longest list for which the person responds correctly is the score and represents their working memory capacity. Or imagine a clinical psychologist who is interested in how depressed a person is. He administers the Beck Depression Inventory, which is a 21-item self-report questionnaire in which the person rates the extent to which they have felt sad, lost energy, and experienced other symptoms of depression over the past 2 weeks. The sum of these 21 ratings is the score and represents the person’s current level of depression.
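Both scoring procedures described above amount to simple rules for turning responses into a single number. The following Python sketch is purely illustrative: the function names, trial data, and ratings are invented for demonstration and are not taken from any published instrument.

```python
def backward_digit_span_score(responses):
    """Score a backward digit span task.

    `responses` is a list of (presented_digits, reported_digits) pairs,
    ordered by increasing list length. The score is the length of the
    longest list the participant reversed correctly before the first error.
    """
    score = 0
    for presented, reported in responses:
        if reported == list(reversed(presented)):
            score = len(presented)
        else:
            break
    return score

def inventory_score(item_ratings):
    """Sum 21 item ratings, as in a Beck-style depression inventory."""
    assert len(item_ratings) == 21
    return sum(item_ratings)

# A participant who reverses 2- and 3-digit lists but fails at 4 digits
trials = [([3, 7], [7, 3]),
          ([2, 8, 5], [5, 8, 2]),
          ([9, 1, 4, 6], [6, 1, 4, 9])]
print(backward_digit_span_score(trials))  # 3
```

In both cases the systematic assignment rule, not the apparatus, is what makes this measurement.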

The important point here is that measurement does not require any particular instruments or procedures. What it  does  require is  some  systematic procedure for assigning scores to individuals or objects so that those scores represent the characteristic of interest.

Psychological Constructs

Many variables studied by psychologists are straightforward and simple to measure. These include age, height, weight, and birth order. You can ask people how old they are and be reasonably sure that they know and will tell you. Although people might not know or want to tell you how much they weigh, you can have them step onto a bathroom scale. Other variables studied by psychologists—perhaps the majority—are not so straightforward or simple to measure. We cannot accurately assess people’s level of intelligence by looking at them, and we certainly cannot put their self-esteem on a bathroom scale. These kinds of variables are called  constructs  (pronounced  CON-structs ) and include personality traits (e.g., extraversion), emotional states (e.g., fear), attitudes (e.g., toward taxes), and abilities (e.g., athleticism).

Psychological constructs cannot be observed directly. One reason is that they often represent  tendencies  to think, feel, or act in certain ways. For example, to say that a particular university student is highly extraverted does not necessarily mean that she is behaving in an extraverted way right now. In fact, she might be sitting quietly by herself, reading a book. Instead, it means that she has a general tendency to behave in extraverted ways (e.g., being outgoing, enjoying social interactions) across a variety of situations. Another reason psychological constructs cannot be observed directly is that they often involve internal processes. Fear, for example, involves the activation of certain central and peripheral nervous system structures, along with certain kinds of thoughts, feelings, and behaviors—none of which is necessarily obvious to an outside observer. Notice also that neither extraversion nor fear “reduces to” any particular thought, feeling, act, or physiological structure or process. Instead, each is a kind of summary of a complex set of behaviors and internal processes.

The Big Five

The Big Five is a set of five broad dimensions that capture much of the variation in human personality. Each of the Big Five can even be defined in terms of six more specific constructs called “facets” (Costa & McCrae, 1992) [1] .

Table 4.1 The Big Five Personality Dimensions

The  conceptual definition  of a psychological construct describes the behaviors and internal processes that make up that construct, along with how it relates to other variables. For example, a conceptual definition of neuroticism (another one of the Big Five) would be that it is people’s tendency to experience negative emotions such as anxiety, anger, and sadness across a variety of situations. This definition might also include that it has a strong genetic component, remains fairly stable over time, and is positively correlated with the tendency to experience pain and other physical symptoms.

Students sometimes wonder why, when researchers want to understand a construct like self-esteem or neuroticism, they do not simply look it up in the dictionary. One reason is that many scientific constructs do not have counterparts in everyday language (e.g., working memory capacity). More important, researchers are in the business of developing definitions that are more detailed and precise—and that more accurately describe the way the world is—than the informal definitions in the dictionary. As we will see, they do this by proposing conceptual definitions, testing them empirically, and revising them as necessary. Sometimes they throw them out altogether. This is why the research literature often includes different conceptual definitions of the same construct. In some cases, an older conceptual definition has been replaced by a newer one that fits and works better. In others, researchers are still in the process of deciding which of various conceptual definitions is the best.

Operational Definitions

An  operational definition  is a definition of a variable in terms of precisely how it is to be measured. These measures generally fall into one of three broad categories.  Self-report measures  are those in which participants report on their own thoughts, feelings, and actions, as with the Rosenberg Self-Esteem Scale (Rosenberg, 1965) [2] . Behavioral measures  are those in which some other aspect of participants’ behavior is observed and recorded. This is an extremely broad category that includes the observation of people’s behavior both in highly structured laboratory tasks and in more natural settings. A good example of the former would be measuring working memory capacity using the backward digit span task. A good example of the latter is a famous operational definition of physical aggression from researcher Albert Bandura and his colleagues (Bandura, Ross, & Ross, 1961) [3] . They let each of several children play for 20 minutes in a room that contained a clown-shaped punching bag called a Bobo doll. They filmed each child and counted the number of acts of physical aggression the child committed. These included hitting the doll with a mallet, punching it, and kicking it. Their operational definition, then, was the number of these specifically defined acts that the child committed during the 20-minute period. Finally,  physiological measures  are those that involve recording any of a wide variety of physiological processes, including heart rate and blood pressure, galvanic skin response, hormone levels, and electrical activity and blood flow in the brain.

For any given variable or construct, there will be multiple operational definitions. Stress is a good example. A rough conceptual definition is that stress is an adaptive response to a perceived danger or threat that involves physiological, cognitive, affective, and behavioral components. But researchers have operationally defined it in several ways. The Social Readjustment Rating Scale (Holmes & Rahe, 1967) [4] is a self-report questionnaire on which people identify stressful events that they have experienced in the past year and assigns points for each one depending on its severity. For example, a man who has been divorced (73 points), changed jobs (36 points), and had a change in sleeping habits (16 points) in the past year would have a total score of 125. The Hassles and Uplifts Scale (Delongis, Coyne, Dakof, Folkman & Lazarus, 1982) [5]  is similar but focuses on everyday stressors like misplacing things and being concerned about one’s weight. The Perceived Stress Scale (Cohen, Kamarck, & Mermelstein, 1983) [6] is another self-report measure that focuses on people’s feelings of stress (e.g., “How often have you felt nervous and stressed?”). Researchers have also operationally defined stress in terms of several physiological variables including blood pressure and levels of the stress hormone cortisol.
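The SRRS scoring just described is a weighted checklist. A minimal sketch of that arithmetic follows; the point values are the ones cited in the text, but the event labels and dictionary structure are illustrative, not the actual instrument.

```python
# Point values for three SRRS life events, as cited in the text
srrs_points = {
    "divorce": 73,
    "change_jobs": 36,
    "change_sleeping_habits": 16,
}

def srrs_score(events):
    """Total stress score: sum the points of each reported life event."""
    return sum(srrs_points[event] for event in events)

total = srrs_score(["divorce", "change_jobs", "change_sleeping_habits"])
print(total)  # 125, matching the worked example in the text
```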

When psychologists use multiple operational definitions of the same construct—either within a study or across studies—they are using converging operations . The idea is that the various operational definitions are “converging” or coming together on the same construct. When scores based on several different operational definitions are closely related to each other and produce similar patterns of results, this constitutes good evidence that the construct is being measured effectively and that it is useful. The various measures of stress, for example, are all correlated with each other and have all been shown to be correlated with other variables such as immune system functioning (also measured in a variety of ways) (Segerstrom & Miller, 2004) [7] . This is what allows researchers eventually to draw useful general conclusions, such as “stress is negatively correlated with immune system functioning,” as opposed to more specific and less useful ones, such as “people’s scores on the Perceived Stress Scale are negatively correlated with their white blood counts.”

Levels of Measurement

The psychologist S. S. Stevens suggested that scores can be assigned to individuals in a way that communicates more or less quantitative information about the variable of interest (Stevens, 1946) [8] . For example, the officials at a 100-m race could simply rank order the runners as they crossed the finish line (first, second, etc.), or they could time each runner to the nearest tenth of a second using a stopwatch (11.5 s, 12.1 s, etc.). In either case, they would be measuring the runners’ times by systematically assigning scores to represent those times. But while the rank ordering procedure communicates the fact that the second-place runner took longer to finish than the first-place finisher, the stopwatch procedure also communicates  how much  longer the second-place finisher took. Stevens actually suggested four different levels of measurement (which he called “scales of measurement”) that correspond to four types of information that can be communicated by a set of scores, and the statistical procedures that can be used with the information.
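The race example can be made concrete: ranks preserve the order of finishers but discard the size of the gaps between them. A small Python sketch (the runner names and times are invented):

```python
# Stopwatch times for four runners, in seconds (quantitative information)
times = {"Ana": 11.5, "Ben": 12.1, "Cai": 12.2, "Dee": 14.0}

# Rank-order procedure: sort by time, keep only finishing position
ranks = {name: pos for pos, (name, _) in
         enumerate(sorted(times.items(), key=lambda kv: kv[1]), start=1)}

print(ranks)  # {'Ana': 1, 'Ben': 2, 'Cai': 3, 'Dee': 4}
# The ranks say Ben beat Cai, but not that the gap was only 0.1 s,
# while Dee finished 1.8 s behind Cai.
```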

The  nominal level  of measurement is used for categorical variables and involves assigning scores that are category labels. Category labels communicate whether any two individuals are the same or different in terms of the variable being measured. For example, if you ask your participants about their marital status, you are engaged in nominal-level measurement. Or if you ask your participants to indicate which of several ethnicities they identify themselves with, you are again engaged in nominal-level measurement. The essential point about nominal scales is that they do not imply any ordering among the responses. For example, when classifying people according to their favorite color, there is no sense in which green is placed “ahead of” blue. Responses are merely categorized. Nominal scales thus embody the lowest level of measurement [9] .

The remaining three levels of measurement are used for quantitative variables. The  ordinal level  of measurement involves assigning scores so that they represent the rank order of the individuals. Ranks communicate not only whether any two individuals are the same or different in terms of the variable being measured but also whether one individual is higher or lower on that variable. For example, a researcher wishing to measure consumers’ satisfaction with their microwave ovens might ask them to specify their feelings as either “very dissatisfied,” “somewhat dissatisfied,” “somewhat satisfied,” or “very satisfied.” The items in this scale are ordered, ranging from least to most satisfied. This is what distinguishes ordinal from nominal scales. Unlike nominal scales, ordinal scales allow comparisons of the degree to which two individuals rate the variable. For example, our satisfaction ordering makes it meaningful to assert that one person is more satisfied than another with their microwave ovens. Such an assertion reflects the first person’s use of a verbal label that comes later in the list than the label chosen by the second person.

On the other hand, ordinal scales fail to capture important information that will be present in the other levels of measurement we examine. In particular, the difference between two levels of an ordinal scale cannot be assumed to be the same as the difference between two other levels (just like you cannot assume that the gap between the runners in first and second place is equal to the gap between the runners in second and third place). In our satisfaction scale, for example, the difference between the responses “very dissatisfied” and “somewhat dissatisfied” is probably not equivalent to the difference between “somewhat dissatisfied” and “somewhat satisfied.” Nothing in our measurement procedure allows us to determine whether the two differences reflect the same difference in psychological satisfaction. Statisticians express this point by saying that the differences between adjacent scale values do not necessarily represent equal intervals on the underlying scale giving rise to the measurements. (In our case, the underlying scale is the true feeling of satisfaction, which we are trying to measure.)

The  interval level  of measurement involves assigning scores using numerical scales in which intervals have the same interpretation throughout. As an example, consider either the Fahrenheit or Celsius temperature scales. The difference between 30 degrees and 40 degrees represents the same temperature difference as the difference between 80 degrees and 90 degrees. This is because each 10-degree interval has the same physical meaning (in terms of the kinetic energy of molecules).

Interval scales are not perfect, however. In particular, they do not have a true zero point even if one of the scaled values happens to carry the name “zero.” The Fahrenheit scale illustrates the issue. Zero degrees Fahrenheit does not represent the complete absence of temperature (the absence of any molecular kinetic energy). In reality, the label “zero” is applied to its temperature for quite accidental reasons connected to the history of temperature measurement. Since an interval scale has no true zero point, it does not make sense to compute ratios of temperatures. For example, there is no sense in which the ratio of 40 to 20 degrees Fahrenheit is the same as the ratio of 100 to 50 degrees; no interesting physical property is preserved across the two ratios. After all, if the “zero” label were applied at the temperature that Fahrenheit happens to label as 10 degrees, the two ratios would instead be 30 to 10 and 90 to 40, no longer the same! For this reason, it does not make sense to say that 80 degrees is “twice as hot” as 40 degrees. Such a claim would depend on an arbitrary decision about where to “start” the temperature scale, namely, what temperature to call zero (whereas the claim is intended to make a more fundamental assertion about the underlying physical reality).
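The arithmetic in this paragraph is easy to verify: shifting the zero point changes temperature ratios, which is why ratios on an interval scale carry no meaning. A quick check in Python:

```python
# On the ordinary Fahrenheit scale these two ratios happen to agree
assert 40 / 20 == 100 / 50 == 2.0

# Relabel the scale so "zero" sits at what Fahrenheit calls 10 degrees
shift = 10
r1 = (40 - shift) / (20 - shift)    # 30/10 = 3.0
r2 = (100 - shift) / (50 - shift)   # 90/40 = 2.25
print(r1, r2)  # no longer equal -- the ratio depends on where zero is put
```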

In psychology, the intelligence quotient (IQ) is often considered to be measured at the interval level. While it is technically possible to receive a score of 0 on an IQ test, such a score would not indicate the complete absence of IQ. Moreover, a person with an IQ score of 140 does not have twice the IQ of a person with a score of 70. However, the difference between IQ scores of 80 and 100 is the same as the difference between IQ scores of 120 and 140.

Finally, the  ratio level  of measurement involves assigning scores in such a way that there is a true zero point that represents the complete absence of the quantity. Height measured in meters and weight measured in kilograms are good examples. So are counts of discrete objects or events such as the number of siblings one has or the number of questions a student answers correctly on an exam. You can think of a ratio scale as the three earlier scales rolled up in one. Like a nominal scale, it provides a name or category for each object (the numbers serve as labels). Like an ordinal scale, the objects are ordered (in terms of the ordering of the numbers). Like an interval scale, the same difference at two places on the scale has the same meaning. However, in addition, the same ratio at two places on the scale also carries the same meaning (see Table 4.1).

The Fahrenheit scale for temperature has an arbitrary zero point and is therefore not a ratio scale. However, zero on the Kelvin scale is absolute zero. This makes the Kelvin scale a ratio scale. For example, if one temperature is twice as high as another as measured on the Kelvin scale, then it has twice the kinetic energy of the other temperature.

Another example of a ratio scale is the amount of money you have in your pocket right now (25 cents, 50 cents, etc.). Money is measured on a ratio scale because, in addition to having the properties of an interval scale, it has a true zero point: if you have zero money, this actually implies the absence of money. Since money has a true zero point, it makes sense to say that someone with 50 cents has twice as much money as someone with 25 cents.

Stevens’s levels of measurement are important for at least two reasons. First, they emphasize the generality of the concept of measurement. Although people do not normally think of categorizing or ranking individuals as measurement, in fact they are, as long as the categories or ranks are assigned so that they represent some characteristic of the individuals. Second, the levels of measurement can serve as a rough guide to the statistical procedures that can be used with the data and the conclusions that can be drawn from them. With nominal-level measurement, for example, the only available measure of central tendency is the mode. With ordinal-level measurement, the median or mode can be used as indicators of central tendency. Interval- and ratio-level measurement are typically considered the most desirable because they permit any indicator of central tendency (i.e., mean, median, or mode) to be computed. Also, ratio-level measurement is the only level that allows meaningful statements about ratios of scores. Once again, one cannot say that someone with an IQ of 140 is twice as intelligent as someone with an IQ of 70 because IQ is measured at the interval level, but one can say that someone with six siblings has twice as many as someone with three because number of siblings is measured at the ratio level.
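The link between level of measurement and permissible statistics can be illustrated with Python’s `statistics` module (the data below are invented examples):

```python
from statistics import mode, median, mean

# Nominal: favourite colour -- only the mode is meaningful
colours = ["green", "blue", "blue", "red", "blue"]
print(mode(colours))        # 'blue'

# Ordinal: satisfaction coded 1-4 -- median (or mode) is appropriate
satisfaction = [1, 2, 2, 3, 4, 4, 4]
print(median(satisfaction)) # 3

# Ratio: number of siblings -- mean, median, and mode all make sense,
# and ratio statements ("twice as many") are meaningful
siblings = [0, 1, 2, 3, 6]
print(mean(siblings))       # 2.4
```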

  • Costa, P. T., Jr., & McCrae, R. R. (1992). Normal personality assessment in clinical practice: The NEO Personality Inventory. Psychological Assessment, 4 , 5–13. ↵
  • Rosenberg, M. (1965). Society and the adolescent self-image. Princeton, NJ: Princeton University Press ↵
  • Bandura, A., Ross, D., & Ross, S. A. (1961). Transmission of aggression through imitation of aggressive models. Journal of Abnormal and Social Psychology, 63 , 575–582. ↵
  • Holmes, T. H., & Rahe, R. H. (1967). The Social Readjustment Rating Scale. Journal of Psychosomatic Research, 11 (2), 213-218. ↵
  • Delongis, A., Coyne, J. C., Dakof, G., Folkman, S., & Lazarus, R. S. (1982). Relationships of daily hassles, uplifts, and major life events to health status. Health Psychology, 1 (2), 119-136. ↵
  • Cohen, S., Kamarck, T., & Mermelstein, R. (1983). A global measure of perceived stress. Journal of Health and Social Behavior, 24, 386-396. ↵
  • Segerstrom, S. E., & Miller, G. E. (2004). Psychological stress and the human immune system: A meta-analytic study of 30 years of inquiry. Psychological Bulletin, 130 , 601–630. ↵
  • Stevens, S. S. (1946). On the theory of scales of measurement. Science, 103 , 677–680. ↵
  • Levels of Measurement. Retrieved from http://wikieducator.org/Introduction_to_Research_Methods_In_Psychology/Theories_and_Measurement/Levels_of_Measurement ↵

The assignment of scores to individuals so that the scores represent some characteristic of the individuals.

A subfield of psychology concerned with the theories and techniques of psychological measurement.

Psychological variables that represent an individual's mental state or experience, often not directly observable, such as personality traits, emotional states, attitudes, and abilities.

Describes the behaviors and internal processes that make up a psychological construct, along with how it relates to other variables.

A definition of the variable in terms of precisely how it is to be measured.

Measures in which participants report on their own thoughts, feelings, and actions.

Measures in which some other aspect of participants’ behavior is observed and recorded.

Measures that involve recording any of a wide variety of physiological processes, including heart rate and blood pressure, galvanic skin response, hormone levels, and electrical activity and blood flow in the brain.

When psychologists use multiple operational definitions of the same construct—either within a study or across studies.

Four categories, or scales, of measurement (i.e., nominal, ordinal, interval, and ratio) that specify the types of information that a set of scores can have, and the types of statistical procedures that can be used with the scores.

A level of measurement used for categorical variables that involves assigning scores that are category labels.

A measurement that involves assigning scores so that they represent the rank order of the individuals.

A measurement that involves assigning scores using numerical scales in which intervals have the same interpretation throughout.

A measurement that involves assigning scores in such a way that there is a true zero point that represents the complete absence of the quantity.

Research Methods in Psychology Copyright © 2019 by Rajiv S. Jhangiani, I-Chant A. Chiang, Carrie Cuttler, & Dana C. Leighton is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


PLOS Biology, 16(4), April 2018


How measurement science can improve confidence in research results

Anne L. Plant

1 Material Measurement Laboratory, National Institute of Standards and Technology, Gaithersburg, Maryland, United States of America

Chandler A. Becker

Robert J. Hanisch and Ronald F. Boisvert

2 Information Technology Laboratory, National Institute of Standards and Technology, Gaithersburg, Maryland, United States of America

Antonio M. Possolo

John T. Elliott

The current push for rigor and reproducibility is driven by a desire for confidence in research results. Here, we suggest a framework for a systematic process, based on consensus principles of measurement science, to guide researchers and reviewers in assessing, documenting, and mitigating the sources of uncertainty in a study. All study results have associated ambiguities that are not always clarified by simply establishing reproducibility. By explicitly considering sources of uncertainty, noting aspects of the experimental system that are difficult to characterize quantitatively, and proposing alternative interpretations, the researcher provides information that enhances comparability and reproducibility.

Indicators of confidence in research results

While reports about the difficulty of reproducing published biomedical research results in the labs of pharmaceutical companies [ 1 , 2 ] have in large part triggered the current “reproducibility crisis,” reproducibility has also been cited as a concern in computation [ 3 ], forensics [ 4 ], epidemiology [ 5 ], psychology [ 6 ], and other fields, including chemistry, biology, physics and engineering, medicine, and earth and environmental sciences [ 7 ].

While “reproducibility” is the term most often used to describe the issue, it has been frequently pointed out that reproducibility does not guarantee that a result of scientific inquiry tracks the truth [ 8 – 11 ]. It has been suggested that, instead, there is a need for “a fundamental embrace of good scientific methodology” [ 12 ], and the term “metascience” has been proposed to refer to the idea that rigorous methods can be used to examine the reliability of results [ 13 ].

These perspectives suggest that it would be worthwhile to consider how the concepts of measurement science—i.e., metrology—can provide useful guidance that would enable researchers to assess and achieve rigor of a research study [ 14 ]. The goal of measurement science is comparability, which enables evaluation of the results from one time and place relative to results from another time and place; this is ultimately the goal of establishing rigor and reproducibility. The purpose of this manuscript is to provide a practical connection between the field of metrology and the desire for rigor and reproducibility in scientific studies.

In the field of metrology, a measurement consists of two components: a value determined for the measurand and the uncertainty in that value [ 15 ]. The uncertainty around a value is an essential component of a measurement. In the simplest case, the uncertainty is determined by the variability in replicate measurements, but for complicated measurements, it is estimated by the combination of the uncertainties at every step in the process. The concepts that support quantifying measurement uncertainty arise from international conventions that have been agreed to through consensus by scientists in many fields of study over the past 150 years and continue to be developed. These conventions are developed and adopted by the National Metrology Institutes around the world (including the National Institute of Standards and Technology [NIST] in the United States) and international standards organizations such as the International Bureau of Weights and Measures (Bureau International des Poids et Mesures, BIPM), the International Electrotechnical Commission (IEC), the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC), the International Organization for Standardization (ISO), the International Union of Pure and Applied Physics (IUPAP), the International Laboratory Accreditation Cooperation (ILAC), and others. These efforts helped to advance the concepts of modern physics by providing the basis on which comparison of data was made possible [ 14 ]. Thus, it seems appropriate to examine these concepts today to inform our current concerns about rigor and reproducibility.

One of the consensus documents developed by measurement scientists is the Guide to the Expression of Uncertainty in Measurement [ 16 ], commonly known as the GUM. This document describes the types of uncertainty (e.g., Type A, those that are evaluated by statistical methods; and Type B, those that are evaluated by other means) and methods for evaluating and expressing uncertainties. The GUM describes a rigorous approach to quantifying measurement uncertainty that is more readily applied to well-defined physical quantities with discrete values and uncertainties (such as the measurements of amount of a substance, like lead in water) than to measurements that involve many parameters (such as complex experimental studies involving cells and animals). Calculating uncertainties in such complex measurement systems is a topic of ongoing research. But even if uncertainties are not rigorously quantified, the concepts of measurement uncertainty provide a systematic thought process about how to critically evaluate comparability between results produced in different laboratories.
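Even where full quantification is hard, the GUM's basic combination rule is simple for independent inputs: standard uncertainties add in quadrature (root sum of squares). A sketch with hypothetical component values, assuming independent inputs with unit sensitivity coefficients:

```python
import math

# Hypothetical standard uncertainty components for one measurement,
# all expressed in the units of the measurand (e.g., µg/L)
u_components = {
    "replicate variability (Type A)": 0.11,
    "calibration standard (Type B)": 0.05,
    "volumetric steps (Type B)": 0.02,
}

# Combined standard uncertainty for independent inputs with unit
# sensitivity coefficients: root sum of squares
u_c = math.sqrt(sum(u ** 2 for u in u_components.values()))

# Expanded uncertainty with coverage factor k = 2 (roughly 95% coverage
# when the combined distribution is approximately normal)
k = 2
U = k * u_c

print(f"u_c = {u_c:.3f}, U = {U:.3f}")
```

Note how the largest component dominates the combined value; this is why a budget of uncertainty sources is useful for deciding where to invest effort.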

The GUM identifies examples of sources of uncertainty. These include an incomplete definition of what is being measured (i.e., the measurand); the possibility of nonrepresentative or incomplete sampling, in which the samples measured may not represent all of what was intended to be measured; the approximations and assumptions that are incorporated in the measurement method and procedure; and inadequate knowledge of the effects of environmental conditions on the measurement. In Table 1 , we have grouped the sources of uncertainty identified in the GUM that are common to many scientific studies, and we have indicated measurement science approaches for characterizing and mitigating uncertainty.

Abbreviations: GUM, Guide to the Expression of Uncertainty in Measurement; VIM, International Vocabulary of Basic and General Terms in Metrology

The GUM also provides definitions of many terms such as “repeatability” (which is defined as the closeness of the agreement between the results of successive measurements of the same measurand carried out under the same conditions of measurement) and “reproducibility” (which is defined as the closeness of the agreement between the results of measurements of the same measurand carried out under different conditions of measurement). A complete list of consensus definitions of measurement-related terms can be found in the International Vocabulary of Basic and General Terms in Metrology (VIM) [ 18 ]. A recent publication demonstrates the adoption of these definitions to harmonize practices across the geophysics community [ 19 ].
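The distinction between these two terms can be made concrete with a toy calculation: repeatability reflects the spread of results under the same conditions, reproducibility the spread under different conditions. A sketch with invented results from three laboratories measuring the same measurand:

```python
import statistics

# Hypothetical results for the same measurand: three replicate runs in
# each of three laboratories (different conditions of measurement)
results = {
    "lab_A": [10.1, 10.0, 10.2],
    "lab_B": [10.6, 10.5, 10.7],
    "lab_C": [9.7, 9.8, 9.6],
}

# Repeatability: pooled within-laboratory standard deviation
# (same conditions of measurement)
within = [statistics.variance(runs) for runs in results.values()]
s_repeat = (sum(within) / len(within)) ** 0.5

# Reproducibility: standard deviation of all results taken together
# (different conditions of measurement)
all_runs = [x for runs in results.values() for x in runs]
s_reproduce = statistics.stdev(all_runs)

print(f"repeatability s = {s_repeat:.2f}, reproducibility s = {s_reproduce:.2f}")
```

The gap between the two values (roughly 0.1 versus 0.4 in this invented example) is exactly what between-laboratory comparability studies aim to explain and reduce.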

What does Table 1 add to existing efforts?

There have been many efforts to encourage more reliable research results, and many fields have proposed or instituted conventions, checklists, requirements, and reporting standards that are applicable to their specific disciplines. Some of these include the Grades of Recommendation, Assessment, Development and Evaluation (GRADE) approach for assessing clinical evidence [ 20 ], the minimum information activities that have a long history in the biosciences (e.g., Minimum Information about a Microarray Experiment [MIAME]) [ 21 ], checklists developed by scientific journals requiring specific criteria to be reported [ 22 ], a NIST system for checking thermodynamic data prior to publication [ 23 ], and many more. These efforts are not intended to be comprehensive determinations of potential sources of uncertainty in measurement. But interest in measurement science principles is increasing. For example, the Minimum Information About a Cellular Assay (MIACA) activity [ 24 ], which was last updated in 2013, encourages reporting the experimental details of cellular assay projects. The more recent Minimum Information About T cell Assays (MIATA) [ 25 , 26 ], which is focused on identifying and encouraging the reporting of variables of particular importance to the outcome of T cell assays, is more comprehensive. MIATA guidelines go beyond descriptions of activities and reagents to include the reporting of quality control activities such as providing information regarding the strategies for data analysis and reporting any effort to pretest medium or serum for assay performance.
The most current National Institutes of Health (NIH) instructions for grant applications [ 27 ] speak to many of the concepts of metrology: stating the scientific premise and considering the strengths and weaknesses of prior research; applying scientific method to experimental design, methodology, analysis, and interpretation; considering biological variables such as sex; and authenticating biological and chemical resources that may be sources of variability. Thus, it seems timely to suggest a comprehensive framework that can help to guide identification of the many other potential sources of uncertainty. The conceptual framework in Table 1 can enhance existing guidelines by helping scientists identify potential sources of uncertainty that might not have been considered in existing checklists and to provide some strategies for reducing uncertainty. Table 1 is designed to help guide researchers’ critical thinking about the various aspects of their research in an organized way that encourages them to document the data they can, and often do, collect that provide confidence in the results.

The inclusion of supporting evidence helps end users of research results—such as decision-makers, commercial developers, and other researchers—know how best to use and follow up on the results. Few research studies will address all aspects indicated in Table 1 . But by explicitly acknowledging what is known—or, more importantly, what isn’t known—about the various components of a research effort, it is easier to see the strengths and limitations of a study and to assess, for example, whether the study is more preliminary in nature or if the results are highly reliable. The Data Readiness Level is a concept that has been put forward by the nanotechnology community and is an example of this kind of approach [ 28 ], and others have suggested the need for this level of reporting [ 11 ].

What are the hurdles that keep ideas such as these from being implemented?

The sociological issues that accompany the “reproducibility crisis” have been discussed in many venues and are beyond the scope of this discussion. Instead, we focus on the principles and practices of measurement science since we find that researchers, particularly in rapidly advancing fields, are sometimes confused about how to apply these principles of the scientific method to achieve “rigor and reproducibility.”

A hurdle to implementation of these concepts is the need for tools and technologies that can reduce the challenges for experimentalists who want to address the elements in Table 1 . There has not been sufficient investment, perhaps, in technologies that could allow us to better characterize the components of our experimental systems, such as antibody reagents, cell lines, or image analysis pipelines. As a scientific community, we have not prioritized investments in software to facilitate collecting information on complex experimental protocols. While there is great interest in data mining, there is still a lack of progress in the development of natural language and other approaches for achieving harmonized vocabularies that would make it easier to compare and share experimental metadata and protocols. Efforts associated with capturing the details of complicated experimental protocols are being undertaken. PLOS has entered into a collaboration with Protocols.io [ 29 ] to facilitate reporting, sharing, and improving protocols. Another effort, ProtocolNavigator [ 30 ], enables collection of highly detailed experimental information and storage of provenance information; there are also supporting links to stored data and explanatory videos [ 31 ]. Challenges associated with data and digital resources are being considered by the Research Data Alliance (RDA) [ 32 ]. The RDA was established in 2013 to foster the sharing of research data but recognized that effective sharing requires standards and best practices and is pursuing technical developments in data discovery, semantics, ontologies, data citation and versioning, data types, and persistent identifiers. Also, with the current emphasis on open data [ 33 ] and large-scale data sharing [ 32 ], it would be helpful to have a means of evaluating the aspects of the research that establish confidence in the results being shared, especially by those who are using data outside of their area of technical expertise.
In addition, increased support for the science that underpins the technologies and methods that help to establish confidence in data will contribute to improving the reusability of published research results.

Conclusions

The consideration by researchers of a systematic approach to identifying sources of uncertainty will enhance comparability of results between laboratories. Because no single scientific observation reveals the absolute “truth,” the job of the researcher and the reviewer is to determine how ambiguities have been reduced and what ambiguities still exist. By addressing and characterizing the components of the study as potential sources of uncertainty, the researcher can provide the supporting evidence that helps to define the characteristics of the data, analysis, and tests of the assumptions that were made; such evidence provides confidence in the results and helps inform the reader about how to use the information. Unfortunately, even when studies include these activities, they are rarely reported in an explicit and systematic way that provides maximum value to the reader.

A framework such as the one outlined in Table 1 is applicable to many areas of scientific research. The ideas presented here are not radical or new but are worthy of reconsideration because of the current concern about comparability of research results. We provide this information in the spirit of stimulating discussion within and among the scientific disciplines. More explicit use and documentation of the concepts discussed above will improve confidence in published research results. Applying these concepts will require commitment and critical thinking on the part of individuals, as well as a continuation of the tradition of cooperative effort within and across scientific communities. The end result will be worth the additional effort.

Funding statement.

The author(s) received no specific funding for this work.

Provenance: Not commissioned; externally peer reviewed.

Handbook of Behavior Therapy in Education, pp. 37–65

Research Methodology and Measurement

  • Frank M. Gresham 3 &
  • Michael P. Carey 3  

Scientific research refers to controlled, systematic, empirical, and critical investigation of natural phenomena that is guided by hypotheses and theory about supposed relations between such phenomena (Kerlinger, 1986). The method of science represents a method of knowing that is unique in that it possesses a self-correcting feature that verifies or disconfirms formally stated predictions (i.e., hypotheses) about natural phenomena. Cohen and Nagel (1934) identified three additional methods of knowing that are diametrically opposed to science: (a) the method of tenacity, (b) the method of authority, and (c) the method of intuition.

Achenbach, T., & Edelbrock, C. (1983). Manual for the Child Behavior Profile . Burlington, VT: University of Vermont.

Achinstein, P. (1968). Concepts of science: A philosophical analysis. Baltimore, MD: The Johns Hopkins Press.

Albee, G. (1970). The uncertain future of clinical psychology. American Psychologist, 25 , 1071–1080.

Arter, J. A., Jenkins, J. R. (1977). Examining the benefits and prevalence of modality considerations in special education. Journal of Special Education, 11 , 281–298.

Atkinson, R. C., Shiffrin, R. M. (1968). Human memory: A proposed system and its control processes. In K. W. Spence, J. T. Spence (Eds.), The psychology of learning and motivation (2nd ed.). New York: Academic Press.

Barlow, D. H. (1980). Behavior therapy: The next decade. Behavior Therapy, 11 , 315–328.

Barlow, D., Hersen, M. (1984). Single case experimental designs (2nd ed.). New York: Pergamon Press.

Barlow, D., Hayes, S., Nelson, R. (1984). The scientist practitioner . New York: Pergamon Press.

Becker, W. C., & Carnine, D. W. (1980). Direct instruction: An effective approach to educational intervention with the disadvantaged and low performers. In B. B. Lahey & A. E. Kazdin (Eds.), Advances in clinical child psychology (pp. 429–473). New York: Academic Press.

Bock, R. D. (1972). Estimating item parameters and latent ability when responses are scored in two or more nominal categories. Psychometrika, 37 , 29–51.

Brown, F. G. (1983). Principles of educational and psychological measurement (3rd ed.). New York: Holt, Rinehart, Winston.

Burns, L. (1980). Indirect measurement and behavioral assessment: A case for social behaviorism psychometrics. Behavioral Assessment, 2, 197–206.

Campbell, D. T., Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56 , 81–105.

Cohen, J., Cohen, P. (1983). Applied multiple regression/correlation analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.

Cohen, M., Nagel, E. (1934). An introduction to logic and scientific method . New York: Harcourt.

Cone, J. D. (1977). The relevance of reliability and validity for behavioral assessment. Behavior Therapy, 8, 411–426.

Cone, J. D. (1978). The behavioral assessment grid (BAG): A conceptual framework and taxonomy. Behavior Therapy, 9 , 882–888.

Cone, J. (1979). Confounded comparisons in triple response mode assessment. Behavioral Assessment, 1 , 85–95.

Cone, J. (1981). Psychometric considerations. In M. Hersen, A. Bellack (Eds.), Behavioral assessment: A practical handbook (38–70). New York: Pergamon Press.

Crocker, L., Algina, J. (1986). Introduction to classical and modern test theory . New York: Holt, Rinehart, Winston.

Cronbach, L. J., Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52 , 281–302.

Cronbach, L. J., Snow, R. (1977). Aptitude and instructional methods: A handbook for research on interactions . New York: Irvington.

Cronbach, L. J., Gleser, G. C., Nanda, H., Rajaratnam, N. (1972). The dependability of behavioral measurements . New York: Wiley.

Garfield, L. L., Kurtz, R. M. (1976). Clinical psychologists in the 1970s. American Psychologist, 31 , 1–9.

Ghiselli, E., Campbell, J., Zedeck, S. (1981). Measurement theory for the behavioral sciences . San Francisco: W. H. Freeman.

Gresham, F. M. (1982). A model for the behavioral assessment of behavior disorders in children: Measurement considerations and practical application. Journal of School Psychology, 20 , 131–143.

Guilford, J. P. (1959). The three faces of intellect. American Psychologist, 14 , 469.

Guion, R. M., Ironson, G. H. (1983). Latent trait theory for organizational research. Organizational Behavior and Human Performance, 31 , 54–87.

Harris, A. (1980). Response class: A Guttman scale analysis. Journal of Abnormal Child Psychology, 8 , 213–220.

Hambleton, R. K., Cook, L. L. (1977). Latent trait models and their use in the analysis of educational test data. Journal of Educational Measurement, 14 , 75–96.

Hayes, S. C. (1981). Single case experimental design and empirical clinical practice. Journal of Consulting and Clinical Psychology, 49 , 193–211.

Hays, W. (1985). Statistics (3rd ed). New York: Holt.

Hersen, M., Bellack, A. (Eds.). (1981). Behavioral assessment: A practical handbook (2nd ed.). New York: Pergamon Press.

Jackson, D. N. (1969). Multimethod factor analysis in the evaluation of convergent and discriminant validity. Psychological Bulletin, 29 , 259–271.

Jones, R. R., Reid, J. B., Patterson, G. R. (1975). Naturalistic observation in clinical assessment. In P. McReynolds (Ed.), Advances in psychological assessment (Vol. 3). San Francisco: Jossey-Bass.

Johnston, J. M., Pennypacker, H. S. (1980). Strategies and tactics of human behavioral research . Hillsdale, NJ: Erlbaum.

Jöreskog, K. G., Sörbom, D. (1983). LISREL VI: Analysis of linear structural relationships by maximum likelihood and least squares methods (2nd ed.). Chicago: National Educational Resources.

Kaufman, A., Kaufman, N. (1983). K-ABC: Kaufman Assessment Battery for Children: Interpretive manual . Circle Pines, MN: American Guidance Service.

Kazdin, A. E. (1977). Assessing the clinical or applied importance of behavior change through social validation. Behavior Modification, 1 , 427–452.

Kazdin, A. E. (1979). Situational-specificity: The two-edged sword of behavioral assessment. Behavioral Assessment, 1 , 57–75.

Kelly, E. L., Goldberg, L. R., Fiske, D. W., Kilkowski, J. (1978). Twenty-five years later: A follow-up study of the graduate students in clinical psychology assessed in the VA Selection Research Project. American Psychologist, 33 , 746–755.

Kerlinger, F. (1986). Foundations of behavioral research (3rd ed.). New York: Holt, Rinehart, Winston.

Kratochwill, T., Mace, F. C., Mott, S. (1985). Research methods from applied behavior analysis. In C. R. Reynolds, V. L. Willson (Eds.), Methodological and statistical advances in the study of individual differences (pp. 335–392). New York: Plenum Press.

Lichstein, K. L., Wahler, R. G. (1976). The ecological assessment of an autistic child. Journal of Abnormal Child Psychology, 4 , 31–54.

Linehan, M. M. (1980). Content validity: Its relevance to behavioral assessment. Behavioral Assessment, 2 , 147–159.

Lord, F. M. (1952). The relationship of the reliability of multiple choice items to the distribution of item difficulties. Psychometrika, 18 , 181–194.

Lord, F. M., Novick, M. R. (1968). Statistical theories of mental test scores . Reading, MA: Addison-Wesley.

Martens, B. K., Keller, H. R. (1987). Training school psychologists in the scientific tradition. School Psychology Review, 16 , 329–337.

Mash, E. J., Terdal, L. G. (Eds.) (1981). Behavioral assessment of childhood disorders . New York: Guilford Press.

Meitner, L., Frisch, O. R. (1939). Disintegration of uranium by neutrons: A new type of nuclear reaction. Nature, 143 , 239.

Mercer, J. (1979). In defense of racially and culturally non-discriminatory assessment. School Psychology Digest, 8 , 89–115.

Mischel, W. (1968). Personality and assessment . New York: Wiley.

Nelson, R. O. (1983). Behavioral assessment: Past, present, and future. Behavioral Assessment, 5 , 195–206.

Nunnally, J. C. (1978). Psychometric theory (2nd ed.). New York: McGraw-Hill.

Patterson, G. R. (1986). Performance models for antisocial boys. American Psychologist, 41 , 432–444.

Patterson, G. R., Bank, L. (1986). Bootstrapping your way in the nomological thicket. Behavioral Assessment, 8 , 49–73.

Peterson, D. R. (1976). Need for the Doctor of Psychology degree in professional psychology. American Psychologist, 31 , 792–798.

Reynolds, C. R. (1982). The problem of bias in psychological assessment. In C. Reynolds, T. Gutkin (Eds.), Handbook of school psychology (pp. 178–208). New York: Wiley.

Samejima, F. (1973). Homogenous case of the continuous response model. Psychometrika, 38 , 203–219.

Schmitt, N., Coyle, B. W., Saari, B. B. (1977). A review and critique of analyses of multitrait-multimethod matrices. Multivariate Behavioral Research, 12 , 447–478.

Skinner, B. F. (1974). About behaviorism . New York: Alfred A. Knopf.

Spearman, C. (1904). The proof and measurement of association between two things. American Journal of Psychology, 15 , 72–101.

Staats, A. (1981). Paradigmatic behaviorism, unified theory construction methods, and the Zeitgeist of separatism. American Psychologist, 36, 239–256.

Stokes, T., Baer, D. (1977). An implicit technology of generalization. Journal of Applied Behavior Analysis, 10 , 349–367.

Strain, P. S., Ezzell, D. (1978). The sequence and distribution of behavioral disordered adolescents’ disruptive/inappropriate behaviors: An observational study in a residential setting. Behavior Modification, 2, 403–425.

Tabachnick, B. G., Fidell, L. S. (1983). Using multivariate statistics. New York: Harper & Row.

Tatsuoka, M. M. (1971). Multivariate analysis: Techniques for educational and psychological research. New York: Wiley.

Tryon, R. C. (1957). Reliability and behavior domain validity: Reformulation and historical critique. Psychological Bulletin, 54 , 229–249.

Voeltz, L. M., Evans, I. M. (1982). The assessment of behavioral interrelationships in child behavior therapy. Behavioral Assessment, 4 , 131–165.

Wahler, R. G. (1975). Some structural aspects of deviant child behavior. Journal of Applied Behavior Analysis, 8 , 27–42.

Walker, H. M., Hops, H. (1976). Use of normative peer data as a standard for evaluating classroom treatment effects. Journal of Applied Behavior Analysis, 9 , 159–168.

Wang, M., Stanley, J. (1970). Differential weighting: A review of methods and empirical studies. Review of Educational Research, 40 , 663–705.

Wolf, M. M. (1978). Social validity: The case for subjective measurement or how applied behavior analysis is finding its heart. Journal of Applied Behavior Analysis, 11, 203–214.

Wolfe, V. V., Cone, J. D., Wolfe, D. A. (1986). Social and solipsistic observer training: Effects on agreement with a criterion. Journal of Psychopathology and Behavioral Assessment, 8 , 211–226.

Yeaton, W. H., Sechrest, L. (1981). Critical dimensions in the choice and maintenance of successful treatments: Strength, integrity, and effectiveness. Journal of Consulting and Clinical Psychology, 49 , 156–167.

Author information

Authors and affiliations.

Department of Psychology, Louisiana State University, Baton Rouge, Louisiana, 70803, USA

Frank M. Gresham & Michael P. Carey

Editor information

Editors and affiliations.

Department of Psychology, Louisiana State University, Baton Rouge, Louisiana, USA

Joseph C. Witt & Frank M. Gresham

Department of Educational Psychology, University of Wisconsin-Madison, Madison, Wisconsin, USA

Stephen N. Elliot

Copyright information

© 1988 Plenum Press, New York

About this chapter

Cite this chapter.

Gresham, F.M., Carey, M.P. (1988). Research Methodology and Measurement. In: Witt, J.C., Elliot, S.N., Gresham, F.M. (eds) Handbook of Behavior Therapy in Education. Springer, Boston, MA. https://doi.org/10.1007/978-1-4613-0905-5_2

Publisher Name : Springer, Boston, MA

Print ISBN : 978-1-4612-8238-9

Online ISBN : 978-1-4613-0905-5

Understanding qualitative measurement: The what, why, and how

Last updated: 30 January 2024

You’ll need to collect data to determine the success of any project, from product launches to employee culture initiatives. How that data is collected is just as important as what it reveals.

There are many ways to gather and analyze data, from in-person interviews to emailed surveys. Qualitative research focuses on telling a story with the information collected, while quantitative research involves collecting, analyzing, and presenting hard datasets.

Data gathered through qualitative measurement describes traits or characteristics. You can collect it in different ways, including interviews and observation, and it can be in the form of descriptive words.

While gathering and analyzing data through qualitative measurement can be challenging, especially if you’re working with limited resources or a smaller team, the insights you get at the end of the project are often well worth the effort.

  • What is qualitative measurement?

Qualitative measurement is the process of gathering descriptive, non-numerical data through methods such as interviews and observation. Qualitative measures can be particularly helpful in understanding how a phenomenon or action affects individuals and groups.

  • Why is qualitative data important?

Through data, you can understand how to better serve your customers and employees and anticipate shifts in your business.

The data will provide a deeper understanding of your customers, empowering you to make decisions that positively benefit your company in the long run. Qualitative data helps you see patterns and trends so you can make actionable changes. It can also answer questions posed by your project so you can provide company stakeholders with helpful information and insights.

  • How to collect qualitative data

Your ideal method for collecting qualitative data will depend on the resources you have at your disposal, the size of your team, and your project’s timeline.

You might select one method or a mixture of several. For instance, you could opt to send out surveys following a focus group session to receive additional feedback on one or two specific areas of interest.

Analyze your available resources and discuss options with project stakeholders before committing to one particular plan.

The following are some examples of the methods you could use:

Individual interviews

In-depth interviews are one of the most popular methods of collecting qualitative data. They are usually conducted in person, but you could also use video software.

During interviews, a researcher asks the person questions, logging their answers as they go.

Focus groups

Focus groups are a powerful way to observe and document a group of people, making them a common method for collecting qualitative data. They provide researchers with a direct way to interact with participants, listening to them while they share their insights and experiences and recording responses without the interference of software or third-party systems.

However, while focus groups and interviews are two of the most popular methods, they might not be right for every situation or company.

Direct observation

Direct observation allows researchers to see participants in their natural setting, offering an intriguing “real-life” angle to data collection. This method can provide rich, detailed information about the individuals or groups you are studying.

Surveys

You can conduct surveys in person or online through web software or email. They can be as detailed or general as your project requires. To get the most information from your surveys, use open-ended questions that encourage respondents to share their thoughts and opinions on the subject.

Diaries and journals

Product launches or employee experience initiatives are two examples of projects that could benefit from diaries and journals as a form of qualitative data gathering.

Diaries and journals enable participants to record their thoughts and feelings on a particular topic. By later examining the diary entries, project managers and stakeholders can better understand their reactions and opinions on the project and the questions asked.

  • Examples of qualitative data

Qualitative data is non-numeric information. It’s descriptive, often including adjectives to paint a picture of a situation or object. Qualitative data can be used to describe a person or place, as you can see in the examples below:

The employee prefers black coffee to sweet beverages.

The cat is black and fluffy.

The brown leather couch is worn and faded.

There are many ways to collate qualitative data, but remember to use appropriate language when communicating it to other project stakeholders. Qualitative data needn’t be flowery, but nor should it shy away from the descriptors needed to paint a comprehensive picture.

  • How to measure qualitative data

To measure qualitative data, define a clear project scope ahead of time. Know what questions you want answered and who you need to speak to to make that happen. While not every result can be tallied, understanding the questions and project scope well in advance will leave you better prepared to analyze what you’re querying.

Define the method you wish to use for your project. Whether you opt for surveys, focus groups, or a mixture of methods, employ the approach that will yield the most valuable data.

Work within your means and be realistic about the resources you can dedicate to data collection. For example, if you only have one or two employees to dedicate to the project, don’t commit to multiple focus group meetings with large groups of participants, as it might not be feasible.
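One common way to make qualitative responses countable is to code them into themes and tally theme frequencies. A minimal keyword-based sketch in Python (the responses and coding scheme are invented for illustration; real projects usually code responses manually or with dedicated qualitative-analysis software):

```python
from collections import Counter

# Hypothetical open-ended survey responses
responses = [
    "The checkout process felt slow and confusing",
    "Great support team, very friendly",
    "Slow loading times made me give up",
    "Support answered quickly and was friendly",
]

# Simple keyword-based coding scheme mapping themes to indicator words
codes = {
    "performance": ["slow", "loading"],
    "support": ["support", "friendly"],
    "usability": ["confusing", "checkout"],
}

# Count each response at most once per theme
theme_counts = Counter()
for text in responses:
    lowered = text.lower()
    for theme, keywords in codes.items():
        if any(keyword in lowered for keyword in keywords):
            theme_counts[theme] += 1

print(theme_counts.most_common())
```

The resulting tallies give stakeholders a quantitative summary of qualitative material while the original descriptive responses remain available as supporting evidence.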

  • What’s the difference between qualitative and quantitative measurements?

Qualitative measurements are descriptive. You can’t measure them with a ruler, scale, or other instrument, nor can you express them with a numeric value.

In contrast, quantitative measurements are numeric in nature and can be counted.

  • When to use qualitative vs. quantitative measurements

Both qualitative and quantitative measurements can be valuable. Which to use greatly depends on the nature of your project.

If you’re looking to confirm a theory, such as determining which variety of body butter was sold most during a specific month, quantitative measurements will likely give you the answers you need.

To learn more about concepts and experiences, such as which advertising campaign your target customers prefer, opt for qualitative measurement.

You don’t have to commit to one or the other exclusively. Many businesses use a mixed-method approach to research, combining elements of both quantitative and qualitative measurements. Know the questions you want to answer and proceed accordingly with what makes the most sense for your goals.

  • What are the best ways to communicate qualitative data?

Communicating the qualitative data you’ve gathered can be tricky. The information is subjective, and many project stakeholders or other involved parties may have an easier time understanding and reacting to numeric data.

To effectively communicate qualitative data, you’ll need to create a compelling storyline that offers context and relevant details.

It can also help to describe the data collection method you used. This not only helps set the stage for your story but gives those listening insight into research methodologies they may be unfamiliar with.

Finally, allow plenty of time for questions. Regardless of whether you’re speaking to your company’s CEO or a fellow project manager, you should be prepared to respond to questions with additional, relevant information.

  • How can qualitative measurement be expressed through data?

Qualitative data is non-numeric. It is most often expressed through descriptions since it is surveyed or observed rather than counted.

  • Challenges associated with qualitative measurement

Any in-depth study or research project requires a time commitment. Depending on the research method you employ, other resources might be required. For instance, you might need to compensate the participants of a focus group in some way.

The time and resources required to undertake qualitative measurement could make it prohibitive for many companies, especially small ones with only a few employees. Outsourcing can also be expensive.

Conducting a cost–benefit analysis could help you decide if qualitative measurement is a worthwhile undertaking or one that should be delayed as you plan and prepare.


What is Research Methodology? Definition, Types, and Examples


Research methodology 1,2 is a structured and scientific approach used to collect, analyze, and interpret quantitative or qualitative data to answer research questions or test hypotheses. A research methodology is like a plan for carrying out research and helps keep researchers on track by limiting the scope of the research. Several aspects must be considered before selecting an appropriate research methodology, such as research limitations and ethical concerns that may affect your research.

The research methodology section in a scientific paper describes the different methodological choices made, such as the data collection and analysis methods, and why these choices were selected. The reasons should explain why the methods chosen are the most appropriate to answer the research question. A good research methodology also helps ensure the reliability and validity of the research findings. There are three types of research methodology—quantitative, qualitative, and mixed-method, which can be chosen based on the research objectives.

What is research methodology?

A research methodology describes the techniques and procedures used to identify and analyze information regarding a specific research topic. It is a process by which researchers design their study so that they can achieve their objectives using the selected research instruments. It includes all the important aspects of research, including research design, data collection methods, data analysis methods, and the overall framework within which the research is conducted. While these points can help you understand what research methodology is, you also need to know why it is important to pick the right methodology.

Why is research methodology important?

Having a good research methodology in place has the following advantages: 3

  • Helps other researchers who may want to replicate your research; the explanations will be of benefit to them.
  • You can easily answer any questions about your research if they arise at a later stage.
  • A research methodology provides a framework and guidelines for researchers to clearly define research questions, hypotheses, and objectives.
  • It helps researchers identify the most appropriate research design, sampling technique, and data collection and analysis methods.
  • A sound research methodology helps researchers ensure that their findings are valid and reliable and free from biases and errors.
  • It also helps ensure that ethical guidelines are followed while conducting research.
  • A good research methodology helps researchers in planning their research efficiently, by ensuring optimum usage of their time and resources.


Types of research methodology

There are three types of research methodology based on the type of research and the data required. 1

  • Quantitative research methodology focuses on measuring and testing numerical data. This approach is good for reaching a large number of people in a short amount of time. This type of research helps in testing the causal relationships between variables, making predictions, and generalizing results to wider populations.
  • Qualitative research methodology examines the opinions, behaviors, and experiences of people. It collects and analyzes words and textual data. This research methodology requires fewer participants but is still more time consuming because the time spent per participant is quite large. This method is used in exploratory research where the research problem being investigated is not clearly defined.
  • Mixed-method research methodology uses the characteristics of both quantitative and qualitative research methodologies in the same study. This method allows researchers to validate their findings, verify if the results observed using both methods are complementary, and explain any unexpected results obtained from one method by using the other method.

What are the types of sampling designs in research methodology?

Sampling 4 is an important part of a research methodology and involves selecting a representative sample of the population to conduct the study, making statistical inferences about them, and estimating the characteristics of the whole population based on these inferences. There are two types of sampling designs in research methodology—probability and nonprobability.

  • Probability sampling

In this type of sampling design, a sample is chosen from a larger population using some form of random selection, that is, every member of the population has an equal chance of being selected. The different types of probability sampling are:

  • Systematic —sample members are chosen at regular intervals. It requires selecting a random starting point and a fixed sampling interval determined from the desired sample size. Because the selection pattern is predefined, it is the least time-consuming method.
  • Stratified —researchers divide the population into smaller groups that don’t overlap but represent the entire population. While sampling, these groups can be organized, and then a sample can be drawn from each group separately.
  • Cluster —the population is divided into clusters based on demographic parameters like age, sex, location, etc.

  • Nonprobability sampling

In this type of sampling design, participants are chosen through non-random selection, so not every member of the population has a chance of being included. The different types of nonprobability sampling are:

  • Convenience —selects participants who are most easily accessible to researchers due to geographical proximity, availability at a particular time, etc.
  • Purposive —participants are selected at the researcher’s discretion. Researchers consider the purpose of the study and the understanding of the target audience.
  • Snowball —already selected participants use their social networks to refer the researcher to other potential participants.
  • Quota —while designing the study, the researchers decide how many people with which characteristics to include as participants. The characteristics help in choosing people most likely to provide insights into the subject.
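As an illustration, the systematic and stratified designs above can be sketched in a few lines of Python. This is a hypothetical example: the function names and the `region` field are ours, not from any survey library.

```python
import random

def systematic_sample(population, n):
    """Systematic sampling: pick every k-th member after a random start."""
    k = len(population) // n          # sampling interval
    start = random.randrange(k)       # random starting point within the first interval
    return population[start::k][:n]

def stratified_sample(population, strata_key, per_stratum):
    """Stratified sampling: draw a random sample from each non-overlapping group."""
    strata = {}
    for member in population:
        strata.setdefault(strata_key(member), []).append(member)
    return {name: random.sample(members, min(per_stratum, len(members)))
            for name, members in strata.items()}

people = [{"id": i, "region": "north" if i % 2 else "south"} for i in range(100)]
sys_sample = systematic_sample(people, 10)                          # 10 evenly spaced members
strat_sample = stratified_sample(people, lambda p: p["region"], 5)  # 5 from each region
```

Note that both sketches still rely on random selection, which is what makes them probability designs; a convenience sample, by contrast, would simply take whichever members were easiest to reach.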

What are data collection methods?

During research, data are collected using various methods depending on the research methodology being followed and the research methods being undertaken. Both qualitative and quantitative research have different data collection methods, as listed below.

Qualitative research 5

  • One-on-one interviews: Helps the interviewers understand a respondent’s subjective opinion and experience pertaining to a specific topic or event
  • Document study/literature review/record keeping: Researchers’ review of already existing written materials such as archives, annual reports, research articles, guidelines, policy documents, etc.
  • Focus groups: Constructive discussions that usually include a small sample of about 6-10 people and a moderator, to understand the participants’ opinion on a given topic.
  • Qualitative observation : Researchers collect data using their five senses (sight, smell, touch, taste, and hearing).

Quantitative research 6

  • Sampling: The most common type is probability sampling.
  • Interviews: Commonly telephonic or done in-person.
  • Observations: Structured observations are most commonly used in quantitative research. In this method, researchers make observations about specific behaviors of individuals in a structured setting.
  • Document review: Reviewing existing research or documents to collect evidence for supporting the research.
  • Surveys and questionnaires: Surveys can be administered both online and offline depending on the requirement and sample size.


What are data analysis methods?

The data collected using the various methods for qualitative and quantitative research need to be analyzed to generate meaningful conclusions. These data analysis methods 7 also differ between quantitative and qualitative research.

Quantitative research involves a deductive method for data analysis where hypotheses are developed at the beginning of the research and precise measurement is required. The methods include statistical analysis applications to analyze numerical data and are grouped into two categories—descriptive and inferential.

Descriptive analysis is used to describe the basic features of different types of data to present it in a way that ensures the patterns become meaningful. The different types of descriptive analysis methods are:

  • Measures of frequency (count, percent, frequency)
  • Measures of central tendency (mean, median, mode)
  • Measures of dispersion or variation (range, variance, standard deviation)
  • Measure of position (percentile ranks, quartile ranks)
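All four families of descriptive measures above can be computed with Python's standard `statistics` module; the ratings below are invented for illustration.

```python
import statistics

ratings = [3, 4, 4, 5, 2, 4, 3, 5, 4, 1]        # hypothetical 1-5 survey ratings

# Measures of frequency: how often each value occurs
freq = {value: ratings.count(value) for value in set(ratings)}

# Measures of central tendency
central = {
    "mean": statistics.mean(ratings),
    "median": statistics.median(ratings),
    "mode": statistics.mode(ratings),
}

# Measures of dispersion (sample variance and standard deviation)
spread = {
    "range": max(ratings) - min(ratings),
    "variance": statistics.variance(ratings),
    "stdev": statistics.stdev(ratings),
}

# Measures of position: cut points dividing the sorted data into four quartiles
quartiles = statistics.quantiles(ratings, n=4)
```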

Inferential analysis is used to make predictions about a larger population based on the analysis of the data collected from a smaller population. This analysis is used to study the relationships between different variables. Some commonly used inferential data analysis methods are:

  • Correlation: To understand the relationship between two or more variables.
  • Cross-tabulation: Analyze the relationship between multiple variables.
  • Regression analysis: Study the impact of independent variables on the dependent variable.
  • Frequency tables: To understand the frequency of data.
  • Analysis of variance (ANOVA): To test whether the means of two or more groups differ significantly in an experiment.
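For instance, correlation between two numeric variables can be computed directly from its definition. The sketch below implements Pearson's r by hand; the variable names and data are invented for illustration.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))   # co-deviation from the means
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

hours_studied = [1, 2, 3, 4, 5]
test_scores = [52, 60, 65, 71, 80]
r = pearson_r(hours_studied, test_scores)   # close to +1: strong positive relationship
```

A value near +1 or -1 indicates a strong linear relationship; a value near 0 indicates little or no linear relationship.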

Qualitative research involves an inductive method for data analysis where hypotheses are developed after data collection. The methods include:

  • Content analysis: For analyzing documented information from text and images by determining the presence of certain words or concepts in texts.
  • Narrative analysis: For analyzing content obtained from sources such as interviews, field observations, and surveys. The stories and opinions shared by people are used to answer research questions.
  • Discourse analysis: For analyzing interactions with people considering the social context, that is, the lifestyle and environment, under which the interaction occurs.
  • Grounded theory: Involves hypothesis creation by data collection and analysis to explain why a phenomenon occurred.
  • Thematic analysis: To identify important themes or patterns in data and use these to address an issue.
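Of these, content analysis is the most mechanical and is easy to sketch: the snippet below counts how often chosen concept words occur in hypothetical interview transcripts. This is a toy example, not a full qualitative-coding tool, and the transcripts and concept list are invented.

```python
import re
from collections import Counter

def concept_frequency(responses, concepts):
    """Count how often each concept word appears across interview transcripts."""
    words = Counter()
    for text in responses:
        words.update(re.findall(r"[a-z']+", text.lower()))   # lowercase word tokens
    return {concept: words[concept] for concept in concepts}

transcripts = [
    "The checkout felt slow, and slow pages make me leave.",
    "Support was friendly but the site was slow on mobile.",
]
counts = concept_frequency(transcripts, ["slow", "friendly", "price"])
# counts["slow"] == 3, counts["friendly"] == 1, counts["price"] == 0
```

In practice, content analysis also involves defining coding rules (e.g. synonyms, stemming, context) rather than exact word matches.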

How to choose a research methodology?

Here are some important factors to consider when choosing a research methodology: 8

  • Research objectives, aims, and questions —these would help structure the research design.
  • Review existing literature to identify any gaps in knowledge.
  • Check the statistical requirements —if data-driven or statistical results are needed then quantitative research is the best. If the research questions can be answered based on people’s opinions and perceptions, then qualitative research is most suitable.
  • Sample size —sample size can often determine the feasibility of a research methodology. For a large sample, less effort- and time-intensive methods are appropriate.
  • Constraints —constraints of time, geography, and resources can help define the appropriate methodology.


How to write a research methodology?

A research methodology should include the following components: 3,9

  • Research design —should be selected based on the research question and the data required. Common research designs include experimental, quasi-experimental, correlational, descriptive, and exploratory.
  • Research method —this can be quantitative, qualitative, or mixed-method.
  • Reason for selecting a specific methodology —explain why this methodology is the most suitable to answer your research problem.
  • Research instruments —explain the research instruments you plan to use, mainly referring to the data collection methods such as interviews, surveys, etc. Here as well, a reason should be mentioned for selecting the particular instrument.
  • Sampling —this involves selecting a representative subset of the population being studied.
  • Data collection —involves gathering data using several data collection methods, such as surveys, interviews, etc.
  • Data analysis —describe the data analysis methods you will use once you’ve collected the data.
  • Research limitations —mention any limitations you foresee while conducting your research.
  • Validity and reliability —validity helps identify the accuracy and truthfulness of the findings; reliability refers to the consistency and stability of the results over time and across different conditions.
  • Ethical considerations —research should be conducted ethically. The considerations include obtaining consent from participants, maintaining confidentiality, and addressing conflicts of interest.

Streamline Your Research Paper Writing Process with Paperpal

The methods section is a critical part of a research paper: it allows other researchers to understand your findings and replicate your work when pursuing their own research. However, it is usually also the most difficult section to write. This is where Paperpal can help you overcome writer’s block and create a first draft in minutes with Paperpal Copilot, its secure generative AI feature suite.

With Paperpal you can get research advice, write and refine your work, rephrase and verify the writing, and ensure submission readiness, all in one place. Here’s how you can use Paperpal to develop the first draft of your methods section.  

  • Generate an outline: Input some details about your research to instantly generate an outline for your methods section 
  • Develop the section: Use the outline and suggested sentence templates to expand your ideas and develop the first draft.  
  • Paraphrase and trim: Get clear, concise academic text with paraphrasing that conveys your work effectively and word reduction to fix redundancies.
  • Choose the right words: Enhance text by choosing contextual synonyms based on how the words have been used in previously published work.
  • Check and verify text: Make sure the generated text showcases your methods correctly, has all the right citations, and is original and authentic.

You can repeat this process to develop each section of your research manuscript, including the title, abstract and keywords. Ready to write your research papers faster, better, and without the stress? Sign up for Paperpal and start writing today!

Frequently Asked Questions

Q1. What are the key components of research methodology?

A1. A good research methodology has the following key components:

  • Research design
  • Data collection procedures
  • Data analysis methods
  • Ethical considerations

Q2. Why is ethical consideration important in research methodology?

A2. Ethical consideration is important in research methodology to assure readers of the reliability and validity of the study. Researchers must clearly mention the ethical norms and standards followed during the conduct of the research and also mention if the research has been cleared by any institutional board. The following 10 points are the important principles related to ethical considerations: 10

  • Participants should not be subjected to harm.
  • Respect for the dignity of participants should be prioritized.
  • Full consent should be obtained from participants before the study.
  • Participants’ privacy should be ensured.
  • Confidentiality of the research data should be ensured.
  • Anonymity of individuals and organizations participating in the research should be maintained.
  • The aims and objectives of the research should not be exaggerated.
  • Affiliations, sources of funding, and any possible conflicts of interest should be declared.
  • Communication in relation to the research should be honest and transparent.
  • Misleading information and biased representation of primary data findings should be avoided.

Q3. What is the difference between methodology and method?

A3. Research methodology is different from a research method, although both terms are often confused. Research methods are the tools used to gather data, while the research methodology provides a framework for how research is planned, conducted, and analyzed. The latter guides researchers in making decisions about the most appropriate methods for their research. Research methods refer to the specific techniques, procedures, and tools used by researchers to collect, analyze, and interpret data, for instance surveys, questionnaires, interviews, etc.

Research methodology is, thus, an integral part of a research study. It helps ensure that you stay on track to meet your research objectives and answer your research questions using the most appropriate data collection and analysis tools based on your research design.


  1. Research methodologies. Pfeiffer Library website. Accessed August 15, 2023. https://library.tiffin.edu/researchmethodologies/whatareresearchmethodologies
  2. Types of research methodology. Eduvoice website. Accessed August 16, 2023. https://eduvoice.in/types-research-methodology/
  3. The basics of research methodology: A key to quality research. Voxco. Accessed August 16, 2023. https://www.voxco.com/blog/what-is-research-methodology/
  4. Sampling methods: Types with examples. QuestionPro website. Accessed August 16, 2023. https://www.questionpro.com/blog/types-of-sampling-for-social-research/
  5. What is qualitative research? Methods, types, approaches, examples. Researcher.Life blog. Accessed August 15, 2023. https://researcher.life/blog/article/what-is-qualitative-research-methods-types-examples/
  6. What is quantitative research? Definition, methods, types, and examples. Researcher.Life blog. Accessed August 15, 2023. https://researcher.life/blog/article/what-is-quantitative-research-types-and-examples/
  7. Data analysis in research: Types & methods. QuestionPro website. Accessed August 16, 2023. https://www.questionpro.com/blog/data-analysis-in-research/#Data_analysis_in_qualitative_research
  8. Factors to consider while choosing the right research methodology. PhD Monster website. Accessed August 17, 2023. https://www.phdmonster.com/factors-to-consider-while-choosing-the-right-research-methodology/
  9. What is research methodology? Research and writing guides. Accessed August 14, 2023. https://paperpile.com/g/what-is-research-methodology/
  10. Ethical considerations. Business research methodology website. Accessed August 17, 2023. https://research-methodology.net/research-methodology/ethical-considerations/



Measurement Scales in Research Methodology

Certain research data are qualitative in nature. Data on the attitudes, opinions, or behavior of employees, customers, salespersons, etc., are qualitative. The terms “attitude” and “opinion” have frequently been differentiated in psychological and sociological investigations. A commonly drawn distinction is to view an attitude as a predisposition to act in a certain way and an opinion as a verbalization of that attitude. Thus, a statement by a respondent that he prefers viewing color to black-and-white television programs would be an opinion expressing one aspect of the respondent’s attitude toward color television. Motivation, commitment, satisfaction, leadership effectiveness, etc., involve attitude measurement based on revealed opinions. Such qualitative data require measurement scales in order to be measured.

Types of Measurement Scales used in Research

There are four different scales of measurement used in research: nominal, ordinal, interval, and ratio. The rules used to assign numerals to objects define the kind of scale and the level of measurement. A brief account of each scaling type is given below:

  • Nominal Scales : The nominal scale is the simplest form of measurement. A variable measured on a nominal scale is one which is divided into two or more categories; for example, gender is categorized as male or female, and a question as to whether a family owns an iPhone can be answered ‘Yes’ or ‘No’. It is simply a sorting operation in which all individuals or units or answers can be placed in one category or another (i.e. the categories are exhaustive). The essential characteristic of a nominal scale is that, in terms of a given variable, one individual is different from another and the categories are discriminable (i.e. the categories are mutually exclusive). This characteristic of classification is fundamental to all scales of measurement. Nominal scales that consist of only two categories, such as female-male, agree-disagree, aware-unaware, and yes-no, are unique and are called dichotomous scales. Such dichotomous nominal scales are important to researchers because the numerical labels for the two scale categories can be treated as though they are of interval scale value.
  • Ordinal Scales : Ordinal scales have all the properties of a nominal scale but, in addition, the categories can be ordered along a continuum in terms of a given criterion. Given three categories A, B and C on an ordinal scale, one might be able to say, for example, that A is greater than B and B is greater than C. If numerals are assigned to ordinal scale categories, the numerals serve only as ranks for ordering observations from least to most in terms of the characteristic measured; they do not indicate the distance between the categories. An ordinal scale thus organizes observations in terms of categories such as high, medium and low, or strongly agree, agree, not sure, disagree, and strongly disagree.
  • Interval Scales : Interval scales incorporate all the properties of nominal and ordinal scales and, in addition, indicate the distance or interval between the categories. In formal terms, one can say not only that A is greater than B and B is greater than C, but also that (A-B)=(B-C) or (A-C)=(A-B)+(B-C). Examples of interval scales include age, income and investments. However, an interval scale is one where there is no absolute zero point: the scale can be placed anywhere along a continuum; e.g., age can run between 20 and 60 years and need not necessarily start from 0 years. This makes ratio comparisons, such as saying that A is twice B, invalid.
  • Ratio Scales : A special form of interval scale is the ratio scale, which differs in that it has a true zero point, i.e., a point at which the characteristic being measured is presumed to be absent. Examples of ratio scales include weight, length, income, and expenditure. In each there is a concept of zero income, zero weight, etc. Since ratio scales represent a refinement of interval scales, the two are generally not distinguished and the terms are used interchangeably.

Each of the above four types of scales has a unique method of measurement. Both nominal and ordinal scales consist of a discrete number of categories to which numbers are assigned. Thus, a variable such as the number of families owning a BMW or iPhone can only take values of 0, 1, 2, 3, 4, etc. It cannot have values such as 1.5 or 2.5, as the units are integers and indivisible. Interval and ratio scales, however, can take any value between two integers, as the variables are continuous. For example, given any two ages, however close, it is possible to find a third which lies in between. Interval and ratio scales are superior to nominal and ordinal scales, and a wealth of statistical tools can be employed in their analysis. The different statistical tools are related to these different measurement scales in research, in that there is usually a correspondence between the mathematical assumptions of a statistical tool and the assumptions of the scale of measurement. Care must always be taken to match the tools used with the scale of measurement of the variables, and never to use a method which assumes a higher scale of measurement than the variable allows.
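The correspondence between scale type and permissible statistics can be illustrated with a short Python sketch. The data values below are invented; we use temperature and weight as the interval and ratio examples to keep the distinction between the two clean.

```python
import statistics

nominal = ["yes", "no", "yes", "yes", "no"]       # categories only: count and mode
ordinal = [1, 3, 2, 2, 3]                         # 1=low, 2=medium, 3=high: order, no distances
interval = [18.0, 21.5, 19.0, 22.5, 20.0]         # temperature in Celsius: distances meaningful, no true zero
ratio = [54.2, 71.8, 63.0, 80.5, 59.1]            # weight in kg: true zero, ratios meaningful

mode_answer = statistics.mode(nominal)    # the mode is the only valid "average" for nominal data
median_level = statistics.median(ordinal) # ordinal data supports order-based summaries like the median
temp_diff = interval[1] - interval[0]     # a 3.5 degree difference is meaningful on an interval scale
weight_ratio = ratio[1] / ratio[0]        # "about a third heavier" is meaningful only on a ratio scale
```

Saying that 21.5 C is "1.19 times as hot" as 18.0 C would be invalid, because the Celsius zero point is arbitrary; the analogous division on the weight data is valid because 0 kg genuinely means no weight.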



What Is Quantitative Research? | Definition, Uses & Methods

Published on June 12, 2020 by Pritha Bhandari . Revised on June 22, 2023.

Quantitative research is the process of collecting and analyzing numerical data. It can be used to find patterns and averages, make predictions, test causal relationships, and generalize results to wider populations.

Quantitative research is the opposite of qualitative research , which involves collecting and analyzing non-numerical data (e.g., text, video, or audio).

Quantitative research is widely used in the natural and social sciences: biology, chemistry, psychology, economics, sociology, marketing, etc. Some examples of quantitative research questions:

  • What is the demographic makeup of Singapore in 2020?
  • How has the average temperature changed globally over the last century?
  • Does environmental pollution affect the prevalence of honey bees?
  • Does working from home increase productivity for people with long commutes?

Table of contents

  • Quantitative research methods
  • Quantitative data analysis
  • Advantages of quantitative research
  • Disadvantages of quantitative research
  • Other interesting articles
  • Frequently asked questions about quantitative research

You can use quantitative research methods for descriptive, correlational or experimental research.

  • In descriptive research , you simply seek an overall summary of your study variables.
  • In correlational research , you investigate relationships between your study variables.
  • In experimental research , you systematically examine whether there is a cause-and-effect relationship between variables.

Correlational and experimental research can both be used to formally test hypotheses , or predictions, using statistics. The results may be generalized to broader populations based on the sampling method used.

To collect quantitative data, you will often need to use operational definitions that translate abstract concepts (e.g., mood) into observable and quantifiable measures (e.g., self-ratings of feelings and energy levels).

Note that quantitative research is at risk for certain research biases , including information bias , omitted variable bias , sampling bias , or selection bias . Be sure that you’re aware of potential biases as you collect and analyze your data to prevent them from impacting your work too much.


Once data is collected, you may need to process it before it can be analyzed. For example, survey and test data may need to be transformed from words to numbers. Then, you can use statistical analysis to answer your research questions .

Descriptive statistics will give you a summary of your data and include measures of averages and variability. You can also use graphs, scatter plots and frequency tables to visualize your data and check for any trends or outliers.

Using inferential statistics , you can make predictions or generalizations based on your data. You can test your hypothesis or use your sample data to estimate the population parameter .
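As a minimal illustration of estimating a population parameter, the sketch below computes a sample mean and a 95% confidence interval using the normal approximation. The data are hypothetical; for small samples like this one, a t-based interval would be more appropriate in practice.

```python
import math
import statistics

sample = [12, 15, 14, 10, 13, 16, 11, 14, 15, 12]   # hypothetical scores from a sample

mean = statistics.mean(sample)
sem = statistics.stdev(sample) / math.sqrt(len(sample))  # standard error of the mean

# 95% confidence interval for the population mean (normal approximation, z = 1.96)
ci = (mean - 1.96 * sem, mean + 1.96 * sem)
```

The interval expresses the uncertainty of the estimate: with repeated sampling, about 95% of intervals constructed this way would contain the true population mean.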

For example, to compare procrastination between two groups, you would first use descriptive statistics to summarize the data: find the mean (average) and the mode (most frequent rating) of procrastination in each group, and plot the data to see if there are any outliers.

You can also assess the reliability and validity of your data collection methods to indicate how consistently and accurately your methods actually measured what you wanted them to.

Quantitative research is often used to standardize data collection and generalize findings . Strengths of this approach include:

  • Replication

Repeating the study is possible because of standardized data collection protocols and tangible definitions of abstract concepts.

  • Direct comparisons of results

The study can be reproduced in other cultural settings, times or with different groups of participants. Results can be compared statistically.

  • Large samples

Data from large samples can be processed and analyzed using reliable and consistent procedures through quantitative data analysis.

  • Hypothesis testing

Using formalized and established hypothesis testing procedures means that you have to carefully consider and report your research variables, predictions, data collection and testing methods before coming to a conclusion.

Despite the benefits of quantitative research, it is sometimes inadequate in explaining complex research topics. Its limitations include:

  • Superficiality

Using precise and restrictive operational definitions may inadequately represent complex concepts. For example, the concept of mood may be represented with just a number in quantitative research, but explained with elaboration in qualitative research.

  • Narrow focus

Predetermined variables and measurement procedures can mean that you ignore other relevant observations.

  • Structural bias

Despite standardized procedures, structural biases can still affect quantitative research. Missing data, imprecise measurements, or inappropriate sampling methods can all lead to the wrong conclusions.

  • Lack of context

Quantitative research often uses unnatural settings like laboratories or fails to consider historical and cultural contexts that may affect data collection and results.

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

Statistics

  • Chi square goodness of fit test
  • Degrees of freedom
  • Null hypothesis

Methodology

  • Discourse analysis
  • Control groups
  • Mixed methods research
  • Non-probability sampling
  • Inclusion and exclusion criteria

Research bias

  • Rosenthal effect
  • Implicit bias
  • Cognitive bias
  • Selection bias
  • Negativity bias
  • Status quo bias

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses. Qualitative methods allow you to explore concepts and experiences in more detail.

In mixed methods research, you use both qualitative and quantitative data collection and analysis methods to answer your research question.

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.

Operationalization means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioral avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data, it's important to consider how you will operationalize the variables that you want to measure.

Reliability and validity are both about how well a method measures something:

  • Reliability refers to the consistency of a measure (whether the results can be reproduced under the same conditions).
  • Validity refers to the accuracy of a measure (whether the results really do represent what they are supposed to measure).

If you are doing experimental research, you also have to consider the internal and external validity of your experiment.

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses, by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.
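One way to see "how likely a pattern could have arisen by chance" is a permutation test, sketched here with Python's standard library on made-up ratings (a t-test is the more common textbook choice; this version needs no distributional assumptions):

```python
import random
import statistics

def permutation_p_value(a, b, n_iter=10_000, seed=0):
    """Two-sided permutation test for a difference in group means.

    Repeatedly shuffles the pooled observations into two groups of the
    original sizes and counts how often the shuffled difference in means
    is at least as extreme as the observed one.
    """
    rng = random.Random(seed)
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:len(a)]) -
                   statistics.mean(pooled[len(a):]))
        if diff >= observed:
            count += 1
    return count / n_iter

# Hypothetical ratings for two groups (illustrative data)
p = permutation_p_value([5, 6, 4, 5, 7, 5], [3, 4, 3, 2, 4, 3])
print(f"p-value: {p:.4f}")  # a small p suggests the difference is unlikely by chance
```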

Cite this Scribbr article


Bhandari, P. (2023, June 22). What Is Quantitative Research? | Definition, Uses & Methods. Scribbr. Retrieved April 6, 2024, from https://www.scribbr.com/methodology/quantitative-research/

