Defense of the Scientific Hypothesis: From Reproducibility Crisis to Big Data

8 Advantages of the Hypothesis

  • Published: October 2019

This chapter makes the case for the scientific hypothesis from two quite different points of view: statistical and cognitive. The consideration of statistical advantages picks up from the discussion of the Reproducibility Crisis in the previous chapter. First, the chapter presents reasoning showing that hypothesis-based research will, as a general rule, be much more reliable than, for example, open-ended gene searches. It also revives a procedure, Fisher's Method for Combining Results, which, though rarely used nowadays, underscores the strengths of multiple testing of hypotheses. Second, the chapter turns to the many cognitive advantages of hypothesis-based research, which exist because the human mind is inherently and continually at work trying to understand the world. The hypothesis is a natural way of channeling this drive into science. It is also a powerful organizational tool that serves as a blueprint for investigations and helps organize scientific thinking and communication.
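As a rough illustration of the procedure mentioned above, the sketch below combines p-values from several independent tests of the same hypothesis using Fisher's method. The study results shown are invented for demonstration only, and the calculation relies on standard SciPy routines rather than anything specific to the book.

```python
# Illustrative sketch of Fisher's Method for Combining Results:
# p-values from independent tests of the same hypothesis are pooled
# into a single chi-square test. The p-values below are hypothetical.
from math import log
from scipy.stats import chi2

p_values = [0.08, 0.11, 0.05, 0.20]  # four small studies, none individually "significant"

# Fisher's statistic: -2 * sum(ln p_i) follows a chi-square distribution
# with 2k degrees of freedom when all k null hypotheses are true.
statistic = -2 * sum(log(p) for p in p_values)
combined_p = chi2.sf(statistic, df=2 * len(p_values))

print(f"Fisher statistic = {statistic:.2f}, combined p = {combined_p:.3f}")
# Individually weak results can add up to strong combined evidence,
# which is the point the chapter makes about multiple testing of hypotheses.
```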

Chapter 3: Developing a Research Question

3.4 Hypotheses

When researchers do not have predictions about what they will find, they conduct research to answer a question or questions with an open-minded desire to know about a topic, or to help develop hypotheses for later testing. In other situations, the purpose of research is to test a specific hypothesis or hypotheses. A hypothesis is a statement, sometimes but not always causal, describing a researcher's expectations regarding the anticipated findings. Often hypotheses are written to describe the expected relationship between two variables (though this is not a requirement). To develop a hypothesis, one needs to understand the differences between independent and dependent variables and between units of observation and units of analysis. Hypotheses are typically drawn from theories and usually describe how an independent variable is expected to affect some dependent variable or variables. Researchers following a deductive approach to their research will hypothesize about what they expect to find based on the theory or theories that frame their study. If the theory accurately reflects the phenomenon it is designed to explain, then the researcher's hypotheses about what would be observed in the real world should bear out.

Sometimes researchers will hypothesize that a relationship will take a specific direction. As a result, an increase or decrease in one area might be said to cause an increase or decrease in another. For example, you might choose to study the relationship between age and support for marijuana legalization. Perhaps you have done some reading in your spare time, or in another course you have taken. Based on the theories you have read, you hypothesize that "age is negatively related to support for marijuana legalization." What have you just hypothesized? You have hypothesized that as people get older, the likelihood of their support for marijuana legalization decreases. Thus, as age moves in one direction (up), support for marijuana legalization moves in the other direction (down). If writing hypotheses feels tricky, it is sometimes helpful to draw them out, sketching each variable and the expected direction of the relationship between them.

Note that you will almost never hear researchers say that they have proven their hypotheses. A statement that bold implies that a relationship has been shown to exist with absolute certainty and there is no chance that there are conditions under which the hypothesis would not bear out. Instead, researchers tend to say that their hypotheses have been supported (or not). This more cautious way of discussing findings allows for the possibility that new evidence or new ways of examining a relationship will be discovered. Researchers may also discuss a null hypothesis, one that predicts no relationship between the variables being studied. If a researcher rejects the null hypothesis, he or she is saying that the variables in question are somehow related to one another.
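To make the null hypothesis concrete, here is a minimal sketch, not part of the original chapter, that tests the hypothesized negative relationship between age and support for legalization. The data are invented for illustration, and the Pearson correlation is used only as one example of how such a hypothesis could be tested.

```python
# Minimal sketch: testing the null hypothesis of "no relationship" between
# age and support for marijuana legalization. All numbers are invented
# purely for illustration.
from scipy.stats import pearsonr

age     = [19, 24, 31, 38, 45, 52, 60, 67, 74]
support = [9, 8, 8, 7, 6, 6, 4, 3, 2]  # support rated on a hypothetical 1-10 scale

r, p_value = pearsonr(age, support)
print(f"r = {r:.2f}, p = {p_value:.4f}")

# A clearly negative r with a small p-value would lead a researcher to
# reject the null hypothesis and to say the directional hypothesis is
# supported, not proven.
```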

Quantitative and qualitative researchers tend to take different approaches when it comes to hypotheses. In quantitative research, the goal often is to empirically test hypotheses generated from theory. With a qualitative approach, on the other hand, a researcher may begin with some vague expectations about what he or she will find, but the aim is not to test one's expectations against some empirical observations. Instead, theory development or construction is the goal. Qualitative researchers may develop theories from which hypotheses can be drawn, and quantitative researchers may then test those hypotheses. Both types of research are crucial to understanding our social world, and both play an important role in hypothesis development and testing. In the following section, we will look at qualitative and quantitative approaches to research, as well as mixed methods.

Text attributions: This chapter has been adapted from Chapter 5.2 in Principles of Sociological Inquiry, which was adapted by the Saylor Academy without attribution to the original authors or publisher, as requested by the licensor, and is licensed under a CC BY-NC-SA 3.0 License.

Research Methods for the Social Sciences: An Introduction, Copyright © 2020 by Valerie Sheppard, is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.

1.3 Conducting Research in Social Psychology

Learning Objectives

  • Explain why social psychologists rely on empirical methods to study social behavior.
  • Provide examples of how social psychologists measure the variables they are interested in.
  • Review the three types of research designs, and evaluate the strengths and limitations of each type.
  • Consider the role of validity in research, and describe how research programs should be evaluated.

Social psychologists are not the only people interested in understanding and predicting social behavior or the only people who study it. Social behavior is also considered by religious leaders, philosophers, politicians, novelists, and others, and it is a common topic on TV shows. But the social psychological approach to understanding social behavior goes beyond the mere observation of human actions. Social psychologists believe that a true understanding of the causes of social behavior can only be obtained through a systematic scientific approach, and that is why they conduct scientific research. Social psychologists believe that the study of social behavior should be empirical—that is, based on the collection and systematic analysis of observable data.

The Importance of Scientific Research

Because social psychology concerns the relationships among people, and because we can frequently find answers to questions about human behavior by using our own common sense or intuition, many people think that it is not necessary to study it empirically (Lilienfeld, 2011). But although we do learn about people by observing others and therefore social psychology is in fact partly common sense, social psychology is not entirely common sense.

In case you are not convinced about this, perhaps you would be willing to test whether or not social psychology is just common sense by taking a short true-or-false quiz. If so, please have a look at Table 1.1 “Is Social Psychology Just Common Sense?” and respond with either “True” or “False.” Based on your past observations of people’s behavior, along with your own common sense, you will likely have answers to each of the questions on the quiz. But how sure are you? Would you be willing to bet that all, or even most, of your answers have been shown to be correct by scientific research? Would you be willing to accept your score on this quiz for your final grade in this class? If you are like most of the students in my classes, you will get at least some of these answers wrong. (To see the answers and a brief description of the scientific research supporting each of these topics, please go to the Chapter Summary at the end of this chapter.)

Table 1.1 Is Social Psychology Just Common Sense?

One of the reasons we might think that social psychology is common sense is that once we learn about the outcome of a given event (e.g., when we read about the results of a research project), we frequently believe that we would have been able to predict the outcome ahead of time. For instance, if half of a class of students is told that research concerning attraction between people has demonstrated that "opposites attract," and if the other half is told that research has demonstrated that "birds of a feather flock together," most of the students in both groups will report believing that the outcome is true and that they would have predicted the outcome before they had heard about it. Of course, both of these contradictory outcomes cannot be true. The problem is that just reading a description of research findings leads us to think of the many cases we know that support the findings and thus makes them seem believable. The tendency to think that we could have predicted something that we probably would not have been able to predict is called the hindsight bias.

Our common sense also leads us to believe that we know why we engage in the behaviors that we engage in, when in fact we may not. Social psychologist Daniel Wegner and his colleagues have conducted a variety of studies showing that we do not always understand the causes of our own actions. When we think about a behavior before we engage in it, we believe that the thinking guided our behavior, even when it did not (Morewedge, Gray, & Wegner, 2010). People also report that they contribute more to solving a problem when they are led to believe that they have been working harder on it, even though the effort did not increase their contribution to the outcome (Preston & Wegner, 2007). These findings, and many others like them, demonstrate that our beliefs about the causes of social events, and even of our own actions, do not always match the true causes of those events.

Social psychologists conduct research because it often uncovers results that could not have been predicted ahead of time. Putting our hunches to the test exposes our ideas to scrutiny. The scientific approach brings a lot of surprises, but it also helps us test our explanations about behavior in a rigorous manner. It is important for you to understand the research methods used in psychology so that you can evaluate the validity of the research that you read about here, in other courses, and in your everyday life.

Social psychologists publish their research in scientific journals, and your instructor may require you to read some of these research articles. The most important social psychology journals are listed in Table 1.2, "Social Psychology Journals". If you are asked to do a literature search on research in social psychology, you should look for articles from these journals.

Table 1.2 Social Psychology Journals

We’ll discuss the empirical approach and review the findings of many research projects throughout this book, but for now let’s take a look at the basics of how scientists use research to draw overall conclusions about social behavior. Keep in mind as you read this book, however, that although social psychologists are pretty good at understanding the causes of behavior, our predictions are a long way from perfect. We are not able to control the minds or the behaviors of others or to predict exactly what they will do in any given situation. Human behavior is complicated because people are complicated and because the social situations that they find themselves in every day are also complex. It is this complexity—at least for me—that makes studying people so interesting and fun.

Measuring Affect, Behavior, and Cognition

One important aspect of using an empirical approach to understand social behavior is that the concepts of interest must be measured (Figure 1.4, "The Operational Definition"). If we are interested in learning how much Sarah likes Robert, then we need to have a measure of her liking for him. But how, exactly, should we measure the broad idea of "liking"? In scientific terms, the characteristics that we are trying to measure are known as conceptual variables, and the particular method that we use to measure a variable of interest is called an operational definition.

For anything that we might wish to measure, there are many different operational definitions, and which one we use depends on the goal of the research and the type of situation we are studying. To better understand this, let’s look at an example of how we might operationally define “Sarah likes Robert.”

Figure 1.4 The Operational Definition

An idea or conceptual variable (such as “how much Sarah likes Robert”) is turned into a measure through an operational definition.

One approach to measurement involves directly asking people about their perceptions using self-report measures. Self-report measures are measures in which individuals are asked to respond to questions posed by an interviewer or on a questionnaire. Generally, because any one question might be misunderstood or answered incorrectly, in order to provide a better measure, more than one question is asked and the responses to the questions are averaged together. For example, an operational definition of Sarah's liking for Robert might involve asking her to complete the following measure:

I enjoy being around Robert.

Strongly disagree 1 2 3 4 5 6 Strongly agree

I get along well with Robert.

Strongly disagree 1 2 3 4 5 6 Strongly agree

I like Robert.

Strongly disagree 1 2 3 4 5 6 Strongly agree

The operational definition would be the average of her responses across the three questions. Because each question assesses the attitude differently, and yet each question should nevertheless measure Sarah’s attitude toward Robert in some way, the average of the three questions will generally be a better measure than would any one question on its own.
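As a small illustration of this averaging step, the sketch below turns hypothetical responses to the three questions into a single "liking" score. The item wording follows the example above; the ratings themselves are invented.

```python
# Sketch of an operational definition built from self-report items.
# The item wording follows the example in the text; the ratings are hypothetical.
responses = {
    "I enjoy being around Robert.": 5,
    "I get along well with Robert.": 6,
    "I like Robert.": 4,
}

# Each item is rated from 1 (strongly disagree) to 6 (strongly agree);
# the average across the items serves as the measure of "liking".
liking_score = sum(responses.values()) / len(responses)
print(f"Sarah's liking score: {liking_score:.2f}")  # 5.00 in this made-up case
```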

Although it is easy to ask many questions on self-report measures, these measures have a potential disadvantage. As we have seen, people's insights into their own opinions and their own behaviors may not be perfect, and they might also not want to tell the truth—perhaps Sarah really likes Robert, but she is unwilling or unable to tell us so. Therefore, an alternative to self-report that can sometimes provide a more valid measure is to measure behavior itself. Behavioral measures are measures designed to directly assess what people do. Instead of asking Sarah how much she likes Robert, we might instead measure her liking by assessing how much time she spends with Robert or by coding how much she smiles at him when she talks to him. Some examples of behavioral measures that have been used in social psychological research are shown in Table 1.3, "Examples of Operational Definitions of Conceptual Variables That Have Been Used in Social Psychological Research".

Table 1.3 Examples of Operational Definitions of Conceptual Variables That Have Been Used in Social Psychological Research

Social Neuroscience: Measuring Social Responses in the Brain

Still another approach to measuring our thoughts and feelings is to measure brain activity, and recent advances in brain science have created a wide variety of new techniques for doing so. One approach, known as electroencephalography (EEG), is a technique that records the electrical activity produced by the brain's neurons through the use of electrodes that are placed around the research participant's head. An electroencephalogram (EEG) can show if a person is asleep, awake, or anesthetized because the brain wave patterns are known to differ during each state. An EEG can also track the waves that are produced when a person is reading, writing, and speaking with others. A particular advantage of the technique is that the participant can move around while the recordings are being taken, which is useful when measuring brain activity in children, who often have difficulty keeping still. Furthermore, by following electrical impulses across the surface of the brain, researchers can observe changes over very fast time periods.

A research participant wearing an EEG cap (photo: goocy – Research – CC BY-NC 2.0).

Although EEGs can provide information about the general patterns of electrical activity within the brain, and although they allow the researcher to see these changes quickly as they occur in real time, the electrodes must be placed on the surface of the skull, and each electrode measures brain waves from large areas of the brain. As a result, EEGs do not provide a very clear picture of the structure of the brain.

But techniques exist to provide more specific brain images. Functional magnetic resonance imaging (fMRI) is a neuroimaging technique that uses a magnetic field to create images of brain structure and function. In research studies that use the fMRI, the research participant lies on a bed within a large cylindrical structure containing a very strong magnet. Nerve cells in the brain that are active use more oxygen, and the need for oxygen increases blood flow to the area. The fMRI detects the amount of blood flow in each brain region and thus is an indicator of which parts of the brain are active.

Very clear and detailed pictures of brain structures (see Figure 1.5, "Functional Magnetic Resonance Imaging (fMRI)") can be produced via fMRI. Often, the images take the form of cross-sectional "slices" that are obtained as the magnetic field is passed across the brain. The images of these slices are taken repeatedly and are superimposed on images of the brain structure itself to show how activity changes in different brain structures over time. Normally, the research participant is asked to engage in tasks while in the scanner, for instance, to make judgments about pictures of people, to solve problems, or to make decisions about appropriate behaviors. The fMRI images show which parts of the brain are associated with which types of tasks. Another advantage of the fMRI is that it is noninvasive. The research participant simply enters the machine and the scans begin.

Figure 1.5 Functional Magnetic Resonance Imaging (fMRI)

The fMRI creates images of brain structure and activity. In this image, the red and yellow areas represent increased blood flow and thus increased activity. (Images: Reigh LeBlanc – Reigh's Brain rlwat – CC BY-NC 2.0; Wikimedia Commons – public domain.)

Although the scanners themselves are expensive, the advantages of fMRIs are substantial, and scanners are now available in many university and hospital settings. The fMRI is now the most commonly used method of learning about brain structure, and it has been employed by social psychologists to study social cognition, attitudes, morality, emotions, responses to being rejected by others, and racial prejudice, to name just a few topics (Eisenberger, Lieberman, & Williams, 2003; Greene, Sommerville, Nystrom, Darley, & Cohen, 2001; Lieberman, Hariri, Jarcho, Eisenberger, & Bookheimer, 2005; Ochsner, Bunge, Gross, & Gabrieli, 2002; Richeson et al., 2003).

Observational Research

Once we have decided how to measure our variables, we can begin the process of research itself. As you can see in Table 1.4, "Three Major Research Designs Used by Social Psychologists", there are three major approaches to conducting research that are used by social psychologists—the observational approach, the correlational approach, and the experimental approach. Each approach has some advantages and disadvantages.

Table 1.4 Three Major Research Designs Used by Social Psychologists

The most basic research design, observational research, is research that involves making observations of behavior and recording those observations in an objective manner. Although it is possible in some cases to use observational data to draw conclusions about the relationships between variables (e.g., by comparing the behaviors of older versus younger children on a playground), in many cases the observational approach is used only to get a picture of what is happening to a given set of people at a given time and how they are responding to the social situation. In these cases, the observational approach involves creating a type of "snapshot" of the current state of affairs.

One advantage of observational research is that in many cases it is the only possible approach to collecting data about the topic of interest. A researcher who is interested in studying the impact of a hurricane on the residents of New Orleans, the reactions of New Yorkers to a terrorist attack, or the activities of the members of a religious cult cannot create such situations in a laboratory but must be ready to make observations in a systematic way when such events occur on their own. Thus observational research allows the study of unique situations that could not be created by the researcher. Another advantage of observational research is that the people whose behavior is being measured are doing the things they do every day, and in some cases they may not even know that their behavior is being recorded.

One early observational study that made an important contribution to understanding human behavior was reported in a book by Leon Festinger and his colleagues (Festinger, Riecken, & Schachter, 1956). The book, called When Prophecy Fails, reported an observational study of the members of a "doomsday" cult. The cult members believed that they had received information, supposedly sent through "automatic writing" from a planet called "Clarion," that the world was going to end. More specifically, the group members were convinced that the earth would be destroyed, as the result of a gigantic flood, sometime before dawn on December 21, 1954.

When Festinger learned about the cult, he thought that it would be an interesting way to study how individuals in groups communicate with each other to reinforce their extreme beliefs. He and his colleagues observed the members of the cult over a period of several months, beginning in July of the year in which the flood was expected. The researchers collected a variety of behavioral and self-report measures by observing the cult, recording the conversations among the group members, and conducting detailed interviews with them. Festinger and his colleagues also recorded the reactions of the cult members, beginning on December 21, when the world did not end as they had predicted. This observational research provided a wealth of information about the indoctrination patterns of cult members and their reactions to disconfirmed predictions. This research also helped Festinger develop his important theory of cognitive dissonance.

Despite their advantages, observational research designs also have some limitations. Most important, because the data that are collected in observational studies are only a description of the events that are occurring, they do not tell us anything about the relationship between different variables. However, it is exactly this question that correlational research and experimental research are designed to answer.

The Research Hypothesis

Because social psychologists are generally interested in looking at relationships among variables, they begin by stating their predictions in the form of a precise statement known as a research hypothesis. A research hypothesis is a statement about the relationship between the variables of interest and about the specific direction of that relationship. For instance, the research hypothesis "People who are more similar to each other will be more attracted to each other" predicts that there is a relationship between a variable called similarity and another variable called attraction. In the research hypothesis "The attitudes of cult members become more extreme when their beliefs are challenged," the variables that are expected to be related are extremity of beliefs and the degree to which the cult's beliefs are challenged.

Because the research hypothesis states both that there is a relationship between the variables and the direction of that relationship, it is said to be falsifiable. Being falsifiable means that the outcome of the research can demonstrate empirically either that there is support for the hypothesis (i.e., the relationship between the variables was correctly specified) or that there is actually no relationship between the variables or that the actual relationship is not in the direction that was predicted. Thus the research hypothesis that "people will be more attracted to others who are similar to them" is falsifiable because the research could show either that there was no relationship between similarity and attraction or that people we see as similar to us are seen as less attractive than those who are dissimilar.

Correlational Research

The goal of correlational research is to search for and test hypotheses about the relationships between two or more variables. In the simplest case, the correlation is between only two variables, such as that between similarity and liking, or between gender (male versus female) and helping.

In a correlational design, the research hypothesis is that there is an association (i.e., a correlation) between the variables that are being measured. For instance, many researchers have tested the research hypothesis that a positive correlation exists between the use of violent video games and the incidence of aggressive behavior, such that people who play violent video games more frequently would also display more aggressive behavior.

playing violent video games → aggressive behavior (or: aggressive behavior → playing violent video games)

A statistic known as the Pearson correlation coefficient (symbolized by the letter r) is normally used to summarize the association, or correlation, between two variables. The correlation coefficient can range from −1 (indicating a very strong negative relationship between the variables) to +1 (indicating a very strong positive relationship between the variables). Research has found that there is a positive correlation between the use of violent video games and the incidence of aggressive behavior and that the size of the correlation is about r = .30 (Bushman & Huesmann, 2010).

One advantage of correlational research designs is that, like observational research (and in comparison with experimental research designs in which the researcher frequently creates relatively artificial situations in a laboratory setting), they are often used to study people doing the things that they do every day. And correlational research designs also have the advantage of allowing prediction. When two or more variables are correlated, we can use our knowledge of a person’s score on one of the variables to predict his or her likely score on another variable. Because high-school grade point averages are correlated with college grade point averages, if we know a person’s high-school grade point average, we can predict his or her likely college grade point average. Similarly, if we know how many violent video games a child plays, we can predict how aggressively he or she will behave. These predictions will not be perfect, but they will allow us to make a better guess than we would have been able to if we had not known the person’s score on the first variable ahead of time.
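The following sketch, which is not from the original chapter, shows one way to compute a Pearson correlation coefficient and then use the relationship for prediction through a simple least-squares line. The video game and aggression numbers are invented for illustration only.

```python
# Illustrative sketch: computing a Pearson correlation coefficient (r) and
# using the relationship for prediction. The data points are hypothetical.
import numpy as np

hours_violent_games = np.array([0, 1, 2, 3, 5, 6, 8, 10])
aggression_score    = np.array([2, 3, 2, 4, 4, 5, 5, 7])

# Correlation between the two measured variables.
r = np.corrcoef(hours_violent_games, aggression_score)[0, 1]

# A simple least-squares line lets us predict one variable from the other.
slope, intercept = np.polyfit(hours_violent_games, aggression_score, 1)
predicted = slope * 4 + intercept  # predicted aggression for 4 hours of play

print(f"r = {r:.2f}; predicted aggression score at 4 hours = {predicted:.1f}")
# The prediction is not perfect, but it is better than guessing without
# knowing the person's score on the first variable at all.
```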

Despite their advantages, correlational designs have one very important limitation: they cannot be used to draw conclusions about the causal relationships among the variables that have been measured. An observed correlation between two variables does not necessarily indicate that either one of the variables caused the other. Although many studies have found a correlation between the number of violent video games that people play and the amount of aggressive behavior they engage in, this does not mean that playing the video games necessarily caused the aggression. Although one possibility is that playing violent games increases aggression,

playing violent video games → aggressive behavior

another possibility is that the causal direction is exactly opposite to what has been hypothesized. Perhaps increased aggressiveness causes more interest in, and thus increased viewing of, violent games. Although this causal relationship might not seem as logical to you, there is no way to rule out the possibility of such reverse causation on the basis of the observed correlation.

aggressive behavior → playing violent video games

Still another possible explanation for the observed correlation is that it has been produced by the presence of another variable that was not measured in the research. Common-causal variables (also known as third variables) are variables that are not part of the research hypothesis but that cause both the predictor and the outcome variable and thus produce the observed correlation between them (Figure 1.6, "Correlation and Causality"). It has been observed that students who sit in the front of a large class get better grades than those who sit in the back of the class. Although this could be because sitting in the front causes the student to take better notes or to understand the material better, the relationship could also be due to a common-causal variable, such as the interest or motivation of the students to do well in the class. Because a student's interest in the class leads him or her to both get better grades and sit nearer to the teacher, seating position and class grade are correlated, even though neither one caused the other.

Figure 1.6 Correlation and Causality

The correlation between where we sit in a large class and our grade in the class is likely caused by the influence of one or more common-causal variables.

The possibility of common-causal variables must always be taken into account when considering correlational research designs. For instance, in a study that finds a correlation between playing violent video games and aggression, it is possible that a common-causal variable is producing the relationship. Some possibilities include the family background, diet, and hormone levels of the children. Any or all of these potential common-causal variables might be creating the observed correlation between playing violent video games and aggression. Higher levels of the male sex hormone testosterone, for instance, may cause children to both play more violent video games and behave more aggressively.

I like to think of common-causal variables in correlational research designs as “mystery” variables, since their presence and identity is usually unknown to the researcher because they have not been measured. Because it is not possible to measure every variable that could possibly cause both variables, it is always possible that there is an unknown common-causal variable. For this reason, we are left with the basic limitation of correlational research: Correlation does not imply causation.
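A short simulation can make the "mystery variable" problem concrete. In the sketch below, which is not part of the original text, a single unmeasured variable loosely standing in for student interest drives both seating position and grades, and a sizable correlation appears between two variables that never influence each other directly.

```python
# Simulation sketch of a common-causal (third) variable. One unmeasured
# variable drives both measured variables, producing a correlation between
# them even though neither causes the other. Entirely hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000

interest = rng.normal(size=n)                          # unmeasured common cause
seat_row = -interest + rng.normal(scale=1.0, size=n)   # more interest -> sits nearer the front
grade    =  interest + rng.normal(scale=1.0, size=n)   # more interest -> better grades

r = np.corrcoef(seat_row, grade)[0, 1]
print(f"Correlation between seat row and grade: r = {r:.2f}")
# seat_row and grade are correlated here only because both depend on interest.
```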

Experimental Research

The goal of much research in social psychology is to understand the causal relationships among variables, and for this we use experiments. Experimental research designs are research designs that include the manipulation of a given situation or experience for two or more groups of individuals who are initially created to be equivalent, followed by a measurement of the effect of that experience .

In an experimental research design, the variables of interest are called the independent variables and the dependent variables. The independent variable refers to the situation that is created by the experimenter through the experimental manipulations , and the dependent variable refers to the variable that is measured after the manipulations have occurred . In an experimental research design, the research hypothesis is that the manipulated independent variable (or variables) causes changes in the measured dependent variable (or variables). We can diagram the prediction like this, using an arrow that points in one direction to demonstrate the expected direction of causality:

viewing violence (independent variable) → aggressive behavior (dependent variable)

Consider an experiment conducted by Anderson and Dill (2000), which was designed to directly test the hypothesis that viewing violent video games would cause increased aggressive behavior. In this research, male and female undergraduates from Iowa State University were given a chance to play either a violent video game (Wolfenstein 3D) or a nonviolent video game (Myst). During the experimental session, the participants played the video game that they had been given for 15 minutes. Then, after the play, they participated in a competitive task with another student in which they had a chance to deliver blasts of white noise through the earphones of their opponent. The operational definition of the dependent variable (aggressive behavior) was the level and duration of noise delivered to the opponent. The design and the results of the experiment are shown in Figure 1.7 “An Experimental Research Design (After Anderson & Dill, 2000)” .

Figure 1.7 An Experimental Research Design (After Anderson & Dill, 2000)

Two advantages of the experimental research design are (a) an assurance that the independent variable (also known as the experimental manipulation) occurs prior to the measured dependent variable and (b) the creation of initial equivalence between the conditions of the experiment (in this case, by using random assignment to conditions).

Experimental designs have two very nice features. For one, they guarantee that the independent variable occurs prior to the measurement of the dependent variable, which eliminates the possibility of reverse causation. Second, the experimental manipulation allows the researcher to rule out common-causal variables. In experimental designs, the influence of common-causal variables is controlled, and thus eliminated, by creating equivalence among the participants in each of the experimental conditions before the manipulation occurs.

The most common method of creating equivalence among the experimental conditions is through random assignment to conditions , which involves determining separately for each participant which condition he or she will experience through a random process, such as drawing numbers out of an envelope or using a website such as http://randomizer.org . Anderson and Dill first randomly assigned about 100 participants to each of their two groups. Let’s call them Group A and Group B. Because they used random assignment to conditions, they could be confident that before the experimental manipulation occurred , the students in Group A were, on average , equivalent to the students in Group B on every possible variable , including variables that are likely to be related to aggression, such as family, peers, hormone levels, and diet—and, in fact, everything else.
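Random assignment itself is mechanically simple. The sketch below is a hypothetical illustration (it is not Anderson and Dill's actual procedure): shuffle the participant list and deal participants out evenly across the conditions, so that each person has the same chance of ending up in either group.

```python
import random

def randomly_assign(participant_ids, conditions=("violent game", "nonviolent game")):
    """Shuffle participants and deal them out evenly across the conditions."""
    ids = list(participant_ids)
    random.shuffle(ids)
    return {pid: conditions[i % len(conditions)] for i, pid in enumerate(ids)}

# Example: 200 hypothetical participants split into two groups of 100.
assignment = randomly_assign(range(1, 201))
print(sum(1 for c in assignment.values() if c == "violent game"))     # 100
print(sum(1 for c in assignment.values() if c == "nonviolent game"))  # 100
```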

Then, after they had created initial equivalence, Anderson and Dill created the experimental manipulation—they had the participants in Group A play the violent video game and the participants in Group B the nonviolent video game. Then they compared the dependent variable (the white noise blasts) between the two groups and found that the students who had viewed the violent video game gave significantly longer noise blasts than did the students who had played the nonviolent game. Because they had created initial equivalence between the groups, when the researchers observed differences in the duration of white noise blasts between the two groups after the experimental manipulation, they could draw the conclusion that it was the independent variable (and not some other variable) that caused these differences. The idea is that the only thing that was different between the students in the two groups was which video game they had played.

When we create a situation in which the groups of participants are expected to be equivalent before the experiment begins, when we manipulate the independent variable before we measure the dependent variable, and when we change only the independent variable between the conditions, then we can be confident that it is the independent variable that caused the differences in the dependent variable. Such experiments are said to have high internal validity, where internal validity refers to the confidence with which we can draw conclusions about the causal relationship between the variables.

Despite the advantage of determining causation, experimental research designs do have limitations. One is that the experiments are usually conducted in laboratory situations rather than in the everyday lives of people. Therefore, we do not know whether results that we find in a laboratory setting will necessarily hold up in everyday life. To counter this, in some cases experiments are conducted in everyday settings—for instance, in schools or other organizations . Such field experiments are difficult to conduct because they require a means of creating random assignment to conditions, and this is frequently not possible in natural settings.

A second and perhaps more important limitation of experimental research designs is that some of the most interesting and important social variables cannot be experimentally manipulated. If we want to study the influence of the size of a mob on the destructiveness of its behavior, or to compare the personality characteristics of people who join suicide cults with those of people who do not join suicide cults, these relationships must be assessed using correlational designs because it is simply not possible to manipulate mob size or cult membership.

Factorial Research Designs

Social psychological experiments are frequently designed to simultaneously study the effects of more than one independent variable on a dependent variable. Factorial research designs are experimental designs that have two or more independent variables . By using a factorial design, the scientist can study the influence of each variable on the dependent variable (known as the main effects of the variables) as well as how the variables work together to influence the dependent variable (known as the interaction between the variables). Factorial designs sometimes demonstrate the person by situation interaction.

In one such study, Brian Meier and his colleagues (Meier, Robinson, & Wilkowski, 2006) tested the hypothesis that exposure to aggression-related words would increase aggressive responses toward others. Although they did not directly manipulate the social context, they used a technique common in social psychology in which they primed (i.e., activated) thoughts relating to social settings. In their research, half of their participants were randomly assigned to see words relating to aggression and the other half were assigned to view neutral words that did not relate to aggression. The participants in the study also completed a measure of individual differences in agreeableness —a personality variable that assesses the extent to which the person sees themselves as compassionate, cooperative, and high on other-concern.

Then the research participants completed a task in which they thought they were competing with another student. Participants were told that they should press the space bar on the computer as soon as they heard a tone over their headphones, and the person who pressed the button the fastest would be the winner of the trial. Before the first trial, participants set the intensity of a blast of white noise that would be delivered to the loser of the trial. The participants could choose an intensity ranging from 0 (no noise) to the most aggressive response (10, or 105 decibels). In essence, participants controlled a “weapon” that could be used to blast the opponent with aversive noise, and this setting became the dependent variable. At this point, the experiment ended.

Figure 1.8 A Person-Situation Interaction

In this experiment by Meier, Robinson, and Wilkowski (2006) the independent variables are type of priming (aggression or neutral) and participant agreeableness (high or low). The dependent variable is the white noise level selected (a measure of aggression). The participants who were low in agreeableness became significantly more aggressive after seeing aggressive words, but those high in agreeableness did not.

As you can see in Figure 1.8 “A Person-Situation Interaction” , there was a person by situation interaction. Priming with aggression-related words (the situational variable) increased the noise levels selected by participants who were low on agreeableness, but priming did not increase aggression (in fact, it decreased it a bit) for students who were high on agreeableness. In this study, the social situation was important in creating aggression, but it had different effects for different people.
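One way to see what an interaction means is to lay out the cell means of the 2 × 2 design. The numbers below are invented for illustration (they are not Meier and colleagues' data); the point is simply that the effect of priming differs depending on the level of agreeableness.

```python
from statistics import mean

# Invented noise-intensity settings for a 2 (priming) x 2 (agreeableness) design.
data = {
    ("aggressive prime", "low agreeableness"):  [6.1, 5.8, 6.4, 6.0],
    ("neutral prime",    "low agreeableness"):  [4.2, 4.5, 4.0, 4.3],
    ("aggressive prime", "high agreeableness"): [3.1, 3.3, 2.9, 3.2],
    ("neutral prime",    "high agreeableness"): [3.4, 3.6, 3.2, 3.5],
}

cell_means = {cell: mean(scores) for cell, scores in data.items()}

# The "simple effect" of priming at each level of agreeableness.
for level in ("low agreeableness", "high agreeableness"):
    effect = cell_means[("aggressive prime", level)] - cell_means[("neutral prime", level)]
    print(level, round(effect, 2))

# The priming effect is large and positive for low-agreeableness participants
# but near zero (slightly negative) for high-agreeableness participants; that
# difference between the two simple effects is the interaction.
```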

Deception in Social Psychology Experiments

You may have wondered whether the participants in the video game study and the priming study that we just discussed were told about the research hypothesis ahead of time. In fact, both experiments used a cover story — a false statement of what the research was really about . The students in the video game study were not told that the study was about the effects of violent video games on aggression, but rather that it was an investigation of how people learn and develop skills at motor tasks like video games and how these skills affect other tasks, such as competitive games. Similarly, the participants in the priming study were not told that the research was about the effects of aggression-related words on aggressive behavior. In some experiments, the researcher also makes use of an experimental confederate — a person who is actually part of the experimental team but who pretends to be another participant in the study . The confederate helps create the right “feel” of the study, making the cover story seem more real.

In many cases, it is not possible in social psychology experiments to tell the research participants about the real hypotheses in the study, and so cover stories or other types of deception may be used. You can imagine, for instance, that if a researcher wanted to study racial prejudice, he or she could not simply tell the participants that this was the topic of the research because people may not want to admit that they are prejudiced, even if they really are. Although the participants are always told—through the process of informed consent —as much as is possible about the study before the study begins, they may nevertheless sometimes be deceived to some extent. At the end of every research project, however, participants should always receive a complete debriefing in which all relevant information is given, including the real hypothesis, the nature of any deception used, and how the data are going to be used.

Interpreting Research

No matter how carefully it is conducted or what type of design is used, all research has limitations. Any given research project is conducted in only one setting, assesses only one or a few dependent variables, and uses only one set of research participants. Social psychology research is sometimes criticized because it frequently uses college students from Western cultures as participants (Henrich, Heine, & Norenzayan, 2010). But relationships between variables are only really important if they can be expected to be found again when tested using other research designs, other operational definitions of the variables, other participants, other experimenters, and other times and settings.

External validity refers to the extent to which relationships can be expected to hold up when they are tested again in different ways and for different people . Science relies primarily upon replication —that is, the repeating of research —to study the external validity of research findings. Sometimes the original research is replicated exactly, but more often, replications involve using new operational definitions of the independent or dependent variables, or designs in which new conditions or variables are added to the original design. And to test whether a finding is limited to the particular participants used in a given research project, scientists may test the same hypotheses using people from different ages, backgrounds, or cultures. Replication allows scientists to test the external validity as well as the limitations of research findings.

In some cases, researchers may test their hypotheses, not by conducting their own study, but rather by looking at the results of many existing studies, using a meta-analysis — a statistical procedure in which the results of existing studies are combined to determine what conclusions can be drawn on the basis of all the studies considered together . For instance, in one meta-analysis, Anderson and Bushman (2001) found that across all the studies they could locate that included both children and adults, college students and people who were not in college, and people from a variety of different cultures, there was a clear positive correlation (about r = .30) between playing violent video games and acting aggressively. The summary information gained through a meta-analysis allows researchers to draw even clearer conclusions about the external validity of a research finding.
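At its simplest, a meta-analysis of correlational findings converts each study's r to Fisher's z, averages the z values weighted by sample size, and converts the average back to r. The sketch below uses made-up study results (not the actual studies reviewed by Anderson and Bushman) just to show the arithmetic.

```python
import math

# Hypothetical (invented) study results: (correlation r, sample size n).
studies = [(0.25, 120), (0.35, 80), (0.28, 200), (0.33, 150)]

def fisher_z(r):
    return 0.5 * math.log((1 + r) / (1 - r))

def inverse_fisher_z(z):
    return math.tanh(z)

# Fixed-effect combination: weight each study's z by n - 3.
weights = [n - 3 for _, n in studies]
weighted_z = sum(fisher_z(r) * w for (r, _), w in zip(studies, weights))
mean_z = weighted_z / sum(weights)

print(round(inverse_fisher_z(mean_z), 2))  # combined r, about .30 for these numbers
```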

Figure 1.9 Some Important Aspects of the Scientific Approach

Scientists generate research hypotheses, which are tested using an observational, correlational, or experimental research design. The variables of interest are measured using self-report or behavioral measures. Data is interpreted according to its validity (including internal validity and external validity). The results of many studies may be combined and summarized using meta-analysis.

It is important to realize that the understanding of social behavior that we gain by conducting research is a slow, gradual, and cumulative process. The research findings of one scientist or one experiment do not stand alone—no one study “proves” a theory or a research hypothesis. Rather, research is designed to build on, add to, and expand the existing research that has been conducted by other scientists. That is why whenever a scientist decides to conduct research, he or she first reads journal articles and book chapters describing existing research in the domain and then designs his or her research on the basis of the prior findings. The result of this cumulative process is that over time, research findings are used to create a systematic set of knowledge about social psychology ( Figure 1.9 “Some Important Aspects of the Scientific Approach” ).

Key Takeaways

  • Social psychologists study social behavior using an empirical approach. This allows them to discover results that could not have been reliably predicted ahead of time and that may violate our common sense and intuition.
  • The variables that form the research hypothesis, known as conceptual variables, are assessed using measured variables by using, for instance, self-report, behavioral, or neuroimaging measures.
  • Observational research is research that involves making observations of behavior and recording those observations in an objective manner. In some cases, it may be the only approach to studying behavior.
  • Correlational and experimental research designs are based on developing falsifiable research hypotheses.
  • Correlational research designs allow prediction but cannot be used to make statements about causality. Experimental research designs in which the independent variable is manipulated can be used to make statements about causality.
  • Social psychological experiments are frequently factorial research designs in which the effects of more than one independent variable on a dependent variable are studied.
  • All research has limitations, which is why scientists attempt to replicate their results using different measures, populations, and settings and to summarize those results using meta-analyses.

Exercises and Critical Thinking

1. Find journal articles that report observational, correlational, and experimental research designs. Specify the research design, the research hypothesis, and the conceptual and measured variables in each design.

2. Consider each of the following variables. For each one, (a) propose a research hypothesis in which the variable serves as an independent variable and (b) propose a research hypothesis in which the variable serves as a dependent variable.

  • Liking another person
  • Life satisfaction

Anderson, C. A., & Bushman, B. J. (2001). Effects of violent video games on aggressive behavior, aggressive cognition, aggressive affect, physiological arousal, and prosocial behavior: A meta-analytic review of the scientific literature. Psychological Science, 12 (5), 353–359.

Anderson, C. A., & Dill, K. E. (2000). Video games and aggressive thoughts, feelings, and behavior in the laboratory and in life. Journal of Personality and Social Psychology, 78 (4), 772–790.

Bushman, B. J., & Huesmann, L. R. (2010). Aggression. In S. T. Fiske, D. T. Gilbert, & G. Lindzey (Eds.), Handbook of social psychology (5th ed., Vol. 2, pp. 833–863). Hoboken, NJ: John Wiley & Sons.

Eisenberger, N. I., Lieberman, M. D., & Williams, K. D. (2003). Does rejection hurt? An fMRI study of social exclusion. Science, 302 (5643), 290–292.

Festinger, L., Riecken, H. W., & Schachter, S. (1956). When prophecy fails: A social and psychological study of a modern group that predicted the destruction of the world . Minneapolis, MN: University of Minnesota Press.

Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293 (5537), 2105–2108.

Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33 (2–3), 61–83.

Lieberman, M. D., Hariri, A., Jarcho, J. M., Eisenberger, N. I., & Bookheimer, S. Y. (2005). An fMRI investigation of race-related amygdala activity in African-American and Caucasian-American individuals. Nature Neuroscience, 8 (6), 720–722.

Lilienfeld, S. O. (2011, June 13). Public skepticism of psychology: Why many people perceive the study of human behavior as unscientific. American Psychologist. doi: 10.1037/a0023963

Meier, B. P., Robinson, M. D., & Wilkowski, B. M. (2006). Turning the other cheek: Agreeableness and the regulation of aggression-related primes. Psychological Science, 17 (2), 136–142.

Morewedge, C. K., Gray, K., & Wegner, D. M. (2010). Perish the forethought: Premeditation engenders misperceptions of personal control. In R. R. Hassin, K. N. Ochsner, & Y. Trope (Eds.), Self-control in society, mind, and brain (pp. 260–278). New York, NY: Oxford University Press.

Ochsner, K. N., Bunge, S. A., Gross, J. J., & Gabrieli, J. D. E. (2002). Rethinking feelings: An fMRI study of the cognitive regulation of emotion. Journal of Cognitive Neuroscience, 14 (8), 1215–1229.

Preston, J., & Wegner, D. M. (2007). The eureka error: Inadvertent plagiarism by misattributions of effort. Journal of Personality and Social Psychology, 92 (4), 575–584.

Richeson, J. A., Baird, A. A., Gordon, H. L., Heatherton, T. F., Wyland, C. L., Trawalter, S., & Shelton, J. N. (2003). An fMRI investigation of the impact of interracial contact on executive function. Nature Neuroscience, 6 (12), 1323–1328.

Principles of Social Psychology Copyright © 2015 by University of Minnesota is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

3.1.3: Developing Theories and Hypotheses

Learning Objectives

  • Distinguish between a theory and a hypothesis.
  • Discover how theories are used to generate hypotheses and how the results of studies can be used to further inform theories.
  • Understand the characteristics of a good hypothesis.

Theories and Hypotheses

Before describing how to develop a hypothesis, it is important to distinguish between a theory and a hypothesis. A theory is a coherent explanation or interpretation of one or more phenomena. Although theories can take a variety of forms, one thing they have in common is that they go beyond the phenomena they explain by including variables, structures, processes, functions, or organizing principles that have not been observed directly. Consider, for example, Zajonc’s theory of social facilitation and social inhibition (1965) [1] . He proposed that being watched by others while performing a task creates a general state of physiological arousal, which increases the likelihood of the dominant (most likely) response. So for highly practiced tasks, being watched increases the tendency to make correct responses, but for relatively unpracticed tasks, being watched increases the tendency to make incorrect responses. Notice that this theory—which has come to be called drive theory—provides an explanation of both social facilitation and social inhibition that goes beyond the phenomena themselves by including concepts such as “arousal” and “dominant response,” along with processes such as the effect of arousal on the dominant response.

Outside of science, referring to an idea as a theory often implies that it is untested—perhaps no more than a wild guess. In science, however, the term theory has no such implication. A theory is simply an explanation or interpretation of a set of phenomena. It can be untested, but it can also be extensively tested, well supported, and accepted as an accurate description of the world by the scientific community. The theory of evolution by natural selection, for example, is a theory because it is an explanation of the diversity of life on earth—not because it is untested or unsupported by scientific research. On the contrary, the evidence for this theory is overwhelmingly positive and nearly all scientists accept its basic assumptions as accurate. Similarly, the “germ theory” of disease is a theory because it is an explanation of the origin of various diseases, not because there is any doubt that many diseases are caused by microorganisms that infect the body.

A hypothesis , on the other hand, is a specific prediction about a new phenomenon that should be observed if a particular theory is accurate. It is an explanation that relies on just a few key concepts. Hypotheses are often specific predictions about what will happen in a particular study. They are developed by considering existing evidence and using reasoning to infer what will happen in the specific context of interest. Hypotheses are often, but not always, derived from theories. A hypothesis is frequently a prediction based on a theory, but some hypotheses are atheoretical: only after a set of observations has been made is a theory developed. This is because theories are broad in nature and explain larger bodies of data. So if our research question is truly original, we may need to collect some data and make some observations before we can develop a broader theory.

Theories and hypotheses have an if-then relationship: “If drive theory is correct, then cockroaches should run through a straight runway faster, and a branching runway more slowly, when other cockroaches are present.” Although hypotheses are usually expressed as statements, they can always be rephrased as questions. “Do cockroaches run through a straight runway faster when other cockroaches are present?” Thus deriving hypotheses from theories is an excellent way of generating interesting research questions.

But how do researchers derive hypotheses from theories? One way is to generate a research question using the techniques discussed in this chapter and then ask whether any theory implies an answer to that question. For example, you might wonder whether expressive writing about positive experiences improves health as much as expressive writing about traumatic experiences. Although this question is an interesting one on its own, you might then ask whether the habituation theory—the idea that expressive writing causes people to habituate to negative thoughts and feelings—implies an answer. In this case, it seems clear that if the habituation theory is correct, then expressive writing about positive experiences should not be effective because it would not cause people to habituate to negative thoughts and feelings. A second way to derive hypotheses from theories is to focus on some component of the theory that has not yet been directly observed. For example, a researcher could focus on the process of habituation—perhaps hypothesizing that people should show fewer signs of emotional distress with each new writing session.

Among the very best hypotheses are those that distinguish between competing theories. For example, Norbert Schwarz and his colleagues considered two theories of how people make judgments about themselves, such as how assertive they are (Schwarz et al., 1991) [2] . Both theories held that such judgments are based on relevant examples that people bring to mind. However, one theory was that people base their judgments on the number of examples they bring to mind and the other was that people base their judgments on how easily they bring those examples to mind. To test these theories, the researchers asked people to recall either six times when they were assertive (which is easy for most people) or 12 times (which is difficult for most people). Then they asked them to judge their own assertiveness. Note that the number-of-examples theory implies that people who recalled 12 examples should judge themselves to be more assertive because they recalled more examples, but the ease-of-retrieval theory implies that participants who recalled six examples should judge themselves as more assertive because recalling the examples was easier. Thus the two theories made opposite predictions so that only one of the predictions could be confirmed. The surprising result was that participants who recalled fewer examples judged themselves to be more assertive—providing particularly convincing evidence in favor of the ease-of-retrieval theory over the number-of-examples theory.

Theory Testing

The primary way that scientific researchers use theories is sometimes called the hypothetico-deductive method (although this term is much more likely to be used by philosophers of science than by scientists themselves). Researchers begin with a set of phenomena and either construct a theory to explain or interpret them or choose an existing theory to work with. They then make a prediction about some new phenomenon that should be observed if the theory is correct. Again, this prediction is called a hypothesis. The researchers then conduct an empirical study to test the hypothesis. Finally, they reevaluate the theory in light of the new results and revise it if necessary. This process is usually conceptualized as a cycle because the researchers can then derive a new hypothesis from the revised theory, conduct a new empirical study to test the hypothesis, and so on. This approach meshes nicely with the model of scientific research in psychology presented earlier in the textbook, creating a more detailed model of “theoretically motivated” or “theory-driven” research.

As an example, let us consider Zajonc’s research on social facilitation and inhibition. He started with a somewhat contradictory pattern of results from the research literature. He then constructed his drive theory, according to which being watched by others while performing a task causes physiological arousal, which increases an organism’s tendency to make the dominant response. This theory predicts social facilitation for well-learned tasks and social inhibition for poorly learned tasks. He now had a theory that organized previous results in a meaningful way—but he still needed to test it. He hypothesized that if his theory was correct, he should observe that the presence of others improves performance in a simple laboratory task but inhibits performance in a difficult version of the very same laboratory task. To test this hypothesis, one of the studies he conducted used cockroaches as subjects (Zajonc, Heingartner, & Herman, 1969) [3] . The cockroaches ran either down a straight runway (an easy task for a cockroach) or through a cross-shaped maze (a difficult task for a cockroach) to escape into a dark chamber when a light was shined on them. They did this either while alone or in the presence of other cockroaches in clear plastic “audience boxes.” Zajonc found that cockroaches in the straight runway reached their goal more quickly in the presence of other cockroaches, but cockroaches in the cross-shaped maze reached their goal more slowly when they were in the presence of other cockroaches. Thus he confirmed his hypothesis and provided support for his drive theory. (In later studies, Zajonc and his colleagues demonstrated the same drive effects in humans; see, for example, Zajonc & Sales, 1966 [4] .)

Incorporating Theory into Your Research

When you write your research report or plan your presentation, be aware that there are two basic ways that researchers usually include theory. The first is to raise a research question, answer that question by conducting a new study, and then offer one or more theories (usually more) to explain or interpret the results. This format works well for applied research questions and for research questions that existing theories do not address. The second way is to describe one or more existing theories, derive a hypothesis from one of those theories, test the hypothesis in a new study, and finally reevaluate the theory. This format works well when there is an existing theory that addresses the research question—especially if the resulting hypothesis is surprising or conflicts with a hypothesis derived from a different theory.

Using theories in your research will not only give you guidance in coming up with experiment ideas and possible projects, but it also lends legitimacy to your work. Psychologists have been interested in a variety of human behaviors and have developed many theories along the way. Using established theories will help you break new ground as a researcher, not limit you from developing your own ideas.

There are three general characteristics of a good hypothesis. First, a good hypothesis must be testable and falsifiable . We must be able to test the hypothesis using the methods of science and, if you’ll recall Popper’s falsifiability criterion, it must be possible to gather evidence that will disconfirm the hypothesis if it is indeed false. Second, a good hypothesis must be logical. As described above, hypotheses are more than just random guesses. Hypotheses should be informed by previous theories or observations and logical reasoning. Typically, we begin with a broad and general theory and use deductive reasoning to generate a more specific hypothesis to test based on that theory. Occasionally, however, when there is no theory to inform our hypothesis, we use inductive reasoning , which involves using specific observations or research findings to form a more general hypothesis. Finally, the hypothesis should be positive. That is, the hypothesis should make a positive statement about the existence of a relationship or effect, rather than a statement that a relationship or effect does not exist. As scientists, we don’t set out to show that relationships do not exist or that effects do not occur, so our hypotheses should not be worded in a way that suggests an effect or relationship does not exist. The nature of science is to assume that something does not exist and then seek to find evidence to prove this wrong, to show that it really does exist. That may seem backward to you, but that is the nature of the scientific method. The underlying reason for this is beyond the scope of this chapter, but it has to do with statistical theory.

  • Zajonc, R. B. (1965). Social facilitation. Science, 149 , 269–274 ↵
  • Schwarz, N., Bless, H., Strack, F., Klumpp, G., Rittenauer-Schatka, H., & Simons, A. (1991). Ease of retrieval as information: Another look at the availability heuristic. Journal of Personality and Social Psychology, 61 , 195–202. ↵
  • Zajonc, R. B., Heingartner, A., & Herman, E. M. (1969). Social enhancement and impairment of performance in the cockroach. Journal of Personality and Social Psychology, 13 , 83–92. ↵
  • Zajonc, R.B. & Sales, S.M. (1966). Social facilitation of dominant and subordinate responses. Journal of Experimental Social Psychology, 2 , 160-168. ↵

Module 2: Research Methods in Social Psychology

Module Overview

In Module 2 we will address the fact that psychology is the scientific study of behavior and mental processes. We will do this by examining the steps of the scientific method and describing the five major designs used in psychological research. We will also differentiate between reliability and validity and their importance for measurement. Psychology has very clear ethical standards and procedures for scientific research. We will discuss these but also why they are needed. Finally, psychology as a field, but especially social psychology as a subfield, is faced with a replication crisis and issues with the generalizability of its findings. These will be explained to close out the module.

Module Outline

2.1. The Scientific Method

2.2. Research Designs Used by Social Psychologists

2.3. Reliability and Validity

2.4. Research Ethics

2.5. Issues in Social Psychology

Module Learning Outcomes

  • Clarify what it means for psychology to be scientific by examining the steps of the scientific method and the three cardinal features of science.
  • Outline the five main research methods used in psychology and clarify how they are utilized in social psychology.
  • Differentiate and explain the concepts of reliability and validity.
  • Describe key features of research ethics.
  • Clarify the nature of the replication crisis in psychology and the importance of generalizability.

Section Learning Objectives

  • Define scientific method.
  • Outline and describe the steps of the scientific method, defining all key terms.
  • Identify and clarify the importance of the three cardinal features of science.

In Module 1, we learned that psychology is the scientific study of behavior and mental processes. We will spend quite a lot of time on the behavior and mental processes part, but before we proceed, it is prudent to elaborate more on what makes psychology scientific. In fact, it is safe to say that most people not within our discipline or a sister science would be surprised to learn that psychology utilizes the scientific method at all.

So what is the scientific method? Simply, the scientific method is a systematic method for gathering knowledge about the world around us. The key word here is systematic , meaning there is a set way to use it. What is that way? Well, depending on what source you look at, it can include a varying number of steps. For our purposes, the following will be used:

Table 2.1: The Steps of the Scientific Method

Science has at its root three cardinal features that we will see play out time and time again throughout this book, and as mentioned in Module 1. They are:

  • Observation – In order to know about the world around us we must be able to see it firsthand. In relation to social psychology, we know our friend and his parents pretty well, and so in our time with them have observed the influence they exert on his life.
  • Experimentation – To be able to make causal or cause and effect statements, we must be able to isolate variables. We have to manipulate one variable and see the effect of doing so on another variable. Experimentation is the primary method social psychology uses to test its hypotheses.
  • Measurement – How do we know whether or not our friend is truly securely attached to his parents? Well, simply, we measure attachment. In order to do that, we could give our friend a short questionnaire asking about his attachment pattern to his parents. For this questionnaire, let’s say we use a 5-point scale for all questions (with 1 meaning the statement does not apply at all and 5 meaning it definitely applies). If there were 10 questions, then our friend would have a score between 10 and 50. The 10 would come from him answering every question with a 1 and the 50 from answering every question with a 5. If you are not aware, there are four main styles of attachment (secure, anxious-ambivalent, avoidant, and disorganized-disoriented). We would have 2-3 questions assessing each of the 4 styles, meaning that if we had 2 questions for a style, that subscale score would range from 2 to 10; if 3 questions, the range would be 3 to 15. The higher the score, the more likely the person exhibits that style toward the parent, and our friend should only have a high score on one of the four styles if our scale correctly assesses attachment (a brief scoring sketch follows this list). We will discuss reliability and validity in Section 2.3.
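Scoring a scale like this is just a matter of summing the item responses that belong to each subscale. The sketch below uses an invented item-to-style mapping and an invented respondent (only the four-style structure and the 1–5 response format come from the example above; everything else is hypothetical).

```python
# Invented mapping of questionnaire items to attachment styles, and one
# hypothetical respondent's answers on a 1-5 scale (10 items in total).
items_by_style = {
    "secure": [1, 5, 9],
    "anxious-ambivalent": [2, 6],
    "avoidant": [3, 7, 10],
    "disorganized-disoriented": [4, 8],
}
responses = {1: 5, 2: 2, 3: 1, 4: 2, 5: 4, 6: 1, 7: 2, 8: 1, 9: 5, 10: 1}

for style, items in items_by_style.items():
    score = sum(responses[i] for i in items)
    lowest, highest = len(items), 5 * len(items)
    print(f"{style}: {score} (possible range {lowest}-{highest})")

# A 2-item subscale ranges from 2 to 10 and a 3-item subscale from 3 to 15,
# matching the ranges described above; this respondent scores highest on "secure".
```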
Section Learning Objectives

  • List the five main research methods used in psychology.
  • Describe observational research, listing its advantages and disadvantages.
  • Describe case study research, listing its advantages and disadvantages.
  • Describe survey research, listing its advantages and disadvantages.
  • Describe correlational research, listing its advantages and disadvantages.
  • Describe experimental research, listing its advantages and disadvantages.
  • State the utility and need for multimethod research.

Step 3 called on the scientist to test their hypothesis. Psychology as a discipline uses five main research designs. These include observational research, case studies, surveys, correlational designs, and experiments.

2.2.1. Observational Research

In terms of naturalistic observation , the scientist studies human or animal behavior in its natural environment which could include the home, school, or a forest. The researcher counts, measures, and rates behavior in a systematic way and at times uses multiple judges to ensure accuracy in how the behavior is being measured. This is called inter-rater reliability as you will see in Section 2.3. The advantage of this method is that you witness behavior as it occurs and it is not tainted by the experimenter. The disadvantage is that it could take a long time for the behavior to occur and if the researcher is detected then this may influence the behavior of those being observed. In the case of the latter, the behavior of the observed becomes artificial .

Laboratory observation involves observing people or animals in a laboratory setting. The researcher might want to know more about parent-child interactions and so brings a mother and her child into the lab to engage in preplanned tasks such as playing with toys, eating a meal, or the mother leaving the room for a short period of time. The advantage of this method over the naturalistic method is that the experimenter can use sophisticated equipment and videotape the session to examine it at a later time. The problem is that since the subjects know the experimenter is watching them, their behavior could become artificial from the start.

2.2.1.1. Example of an observational social psychology study. Griffiths (1991) studied the gambling behavior of adolescents by observing the clientele of 33 arcades in the UK. He used participant (when the researcher becomes an active participant in the group they are studying) and non-participant observation methodologies and found that adolescent gambling depended on the time of day and the time of year, and regular players had stereotypical behaviors and conformed to specific rules of etiquette. They played for fun, to win, to socialize, for excitement, and/or to escape.

2.2.2. Case Studies

Psychology can also utilize a detailed description of one person or a small group based on careful observation. This was the approach the founder of psychoanalysis, Sigmund Freud, took to develop his theories. The advantage of this method is that you arrive at a rich description of the behavior being investigated but the disadvantage is that what you are learning may be unrepresentative of the larger population and so lacks generalizability . Again, bear in mind that you are studying one person or a very small group. Can you possibly make conclusions about all people from just one or even five or ten? The other issue is that the case study is subject to the bias of the researcher in terms of what is included in the final write up and what is left out. Despite these limitations, case studies can lead us to novel ideas about the cause of behavior and help us to study unusual conditions that occur too infrequently to study with large sample sizes and in a systematic way. Though our field does make use of the case study methodology, social psychology does not frequently use the design.

2.2.2.1. Example of a case study from clinical psychology. In 1895, the book, Studies on Hysteria , was published by Josef Breuer (1842-1925) and Sigmund Freud (1856-1939), and marked the birth of psychoanalysis, though Freud did not use this actual term until a year later. The book included several case studies, including that of Anna O., born February 27, 1859 in Vienna to Jewish parents Siegmund and Recha Pappenheim, strict Orthodox adherents and considered millionaires at the time. Bertha, known in published case studies as Anna O., was expected to complete the formal education of a girl in the upper middle class, which included foreign language, religion, horseback riding, needlepoint, and piano. She felt confined and suffocated in this life and took to a fantasy world she called her “private theater.” Anna also developed hysteria, with symptoms such as memory loss, paralysis, disturbed eye movements, reduced speech, nausea, and mental deterioration. Her symptoms appeared as she cared for her dying father, and her mother called on Breuer to diagnose her condition (note that Freud never actually treated her). Hypnosis was used at first and relieved her symptoms. Breuer made daily visits and allowed her to share stories from her private theater, which he came to call the “talking cure” or “chimney sweeping.” Many of the stories she shared were actually thoughts or events she found troubling, and reliving them helped to relieve or eliminate the symptoms. Breuer’s wife, Mathilde, became jealous of her husband’s relationship with the young girl, leading Breuer to terminate treatment in June of 1882 before Anna had fully recovered. She relapsed and was admitted to Bellevue Sanatorium on July 1, eventually being released in October of the same year. With time, Anna O. did recover from her hysteria and went on to become a prominent member of the Jewish community, involving herself in social work, volunteering at soup kitchens, and becoming ‘House Mother’ at an orphanage for Jewish girls in 1895. Bertha (Anna O.) became involved in the German feminist movement, and in 1904 founded the League of Jewish Women. She published many short stories and a play called Women’s Rights , in which she criticized the economic and sexual exploitation of women, and in 1900 she wrote a book called The Jewish Problem in Galicia , in which she blamed the poverty of the Jews of Eastern Europe on their lack of education. In 1935 she was diagnosed with a tumor, and in 1936 she was summoned by the Gestapo to explain anti-Hitler statements she had allegedly made. She died shortly after this interrogation on May 28, 1936. Freud considered the talking cure of Anna O. to be the origin of psychoanalytic therapy and what would come to be called the cathartic method.

To learn more about observational and case study designs, please take a look at our Research Methods in Psychology textbook by visiting:

https://opentext.wsu.edu/carriecuttler/chapter/observational-research/

For more on Anna O., please see:

https://www.psychologytoday.com/blog/freuds-patients-serial/201201/bertha-pappenheim-1859-1936

2.2.3. Surveys/Self-Report Data

A survey is a questionnaire consisting of at least one scale with some number of questions which assess a psychological construct of interest such as parenting style, depression, locus of control, attitudes, or sensation-seeking behavior. It may be administered by paper and pencil or by computer. Surveys allow for the collection of large amounts of data quickly, but the actual survey could be tedious for the participant, and social desirability , when a participant answers questions dishonestly so that he or she is seen in a more favorable light, could be an issue. For instance, if you are asking high school students about their sexual activity, they may not give genuine answers for fear that their parents will find out. Or if you wanted to know about the prejudicial attitudes of a group of people, you could use the survey method. You could alternatively gather this information via an interview in a structured or unstructured fashion. Important to survey research is random sampling , or when everyone in the population has an equal chance of being included in the sample. This helps the survey to be representative of the population in terms of key demographic variables such as gender, age, ethnicity, race, education level, and religious orientation.
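Drawing a simple random sample is straightforward in code. The sketch below is a hypothetical illustration (the population is just a list of invented IDs): every member of the sampling frame has the same chance of being selected.

```python
import random

# Hypothetical sampling frame: the IDs of everyone in the population of interest.
population = list(range(10_000))

# Simple random sample of 500 people: each member of the population has an
# equal chance of inclusion, which is what lets the sample be representative
# (on average) of the population.
sample = random.sample(population, k=500)
print(len(sample), len(set(sample)))  # 500 500 (no one is selected twice)
```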

To learn more about the survey research design, please take a look at our Research Methods in Psychology textbook by visiting:

https://opentext.wsu.edu/carriecuttler/chapter/7-1-overview-of-survey-research/

2.2.4. Correlational Research

This research method examines the relationship between two variables or two groups of variables. A numerical measure of the strength of this relationship is derived, called the correlation coefficient , and can range from -1.00, a perfect inverse relationship meaning that as one variable goes up the other goes down, to 0 or no relationship at all, to +1.00 or a perfect relationship in which as one variable goes up or down so does the other. In terms of a negative correlation we might say that as a parent becomes more rigid, controlling, and cold, the attachment of the child to the parent goes down. In contrast, as a parent becomes warmer, more loving, and provides structure, the child becomes more attached. The advantage of correlational research is that you can correlate anything. The disadvantage is that you can correlate anything. Variables that really do not have any relationship to one another could be viewed as related. Yes. This is both an advantage and a disadvantage. For instance, we might correlate instances of making peanut butter and jelly sandwiches with someone we are attracted to sitting near us at lunch. Are the two related? Not likely, unless you make a really good PB&J but then the person is probably only interested in you for food and not companionship. The main issue here is that correlation does not allow you to make a causal statement.
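A correlation coefficient is easy to compute; what it cannot do is tell you why two variables move together. Below is a minimal sketch using Python's built-in statistics.correlation (available in Python 3.10 and later), with invented ratings loosely echoing the parenting example above.

```python
from statistics import correlation

# Invented ratings (higher = more of the attribute) for eight hypothetical families.
warmth = [1, 2, 3, 4, 5, 6, 7, 8]
attachment = [2, 1, 4, 3, 6, 5, 8, 7]
rigidity = [8, 7, 6, 5, 4, 3, 2, 1]

print(round(correlation(warmth, attachment), 2))    # about  0.90 (positive correlation)
print(round(correlation(rigidity, attachment), 2))  # about -0.90 (negative correlation)

# Either coefficient describes how strongly the variables move together, but on
# its own neither number justifies a causal claim about parenting and attachment.
```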

To learn more about the correlational research design, please take a look at our Research Methods in Psychology textbook by visiting:

https://opentext.wsu.edu/carriecuttler/chapter/correlational-research/

2.2.5. Example of a Study Using Survey and Correlational Designs

Roccas, Sagiv, Schwartz, and Knafo (2002) examined the relationship between the Big Five personality traits and values by administering the Schwartz (1992) Values Survey, the NEO-PI, a positive affect scale, and a single item assessing religiosity to introductory psychology students at an Israeli university. For Extraversion, it was found that values that define activity, challenge, excitement, and pleasure as desirable goals in life (i.e., stimulation, hedonism, and achievement) were important, while valuing self-denial or self-abnegation, expressed in traditional values, was antithetical.

For Openness, values that emphasize intellectual and emotional autonomy, acceptance and cultivation of diversity, and pursuit of novelty and change (i.e. universalism, self-direction, and stimulation) were important while conformity, security, and tradition values were incompatible. Benevolence, tradition, and to a lesser degree conformity, were important for Agreeableness while power and achievement correlated negatively. In terms of Conscientiousness (C), there was a positive correlation with security values as both share the goal of maintaining smooth interpersonal relations and avoiding disruption of social order and there was a negative correlation with stimulation, indicating an avoidance of risk as a motivator of C.

Finally, there was little association of values with the domain of Neuroticism but a closer inspection of the pattern of correlations with the facets of N suggests two components. First, the angry hostility and impulsiveness facets could be called extrapunitive since the negative emotion is directed outward and tends to correlate positively with hedonism and stimulation values and negatively with benevolence, tradition, conformity, and C values. Second, the anxiety, depression, self-consciousness, and vulnerability facets could be called intrapunitive since the negative emotion is directed inward. This component tends to correlate positively with tradition values and negatively with achievement and stimulation values.

2.2.6. Experiments

An experiment is a controlled test of a hypothesis in which a researcher manipulates one variable and measures its effect on another variable. The variable that is manipulated is called the independent variable (IV) and the one that is measured is called the dependent variable (DV) . A common feature of experiments is to have a control group that does not receive the treatment or is not manipulated and an experimental group that does receive the treatment or manipulation. If the experiment includes random assignment , participants have an equal chance of being placed in the control or experimental group. The control group allows the researcher to make a comparison to the experimental group, which makes a causal statement possible, and stronger.
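Putting those pieces together, a bare-bones experiment randomly assigns participants to a control or an experimental group, applies the manipulation to the experimental group only, and then compares the dependent variable across the two groups. The simulation below is a hypothetical illustration of that logic (the numbers and the +1.0 treatment effect are invented), not a real data set.

```python
import random
from statistics import mean

random.seed(7)

# Randomly assign 100 hypothetical participants to two groups of 50.
participants = list(range(100))
random.shuffle(participants)
experimental, control = participants[:50], participants[50:]

# Simulate the dependent variable: the manipulation adds an invented
# treatment effect of +1.0 for the experimental group only.
def measure_dv(in_experimental_group):
    return random.gauss(5.0, 1.0) + (1.0 if in_experimental_group else 0.0)

dv_experimental = [measure_dv(True) for _ in experimental]
dv_control = [measure_dv(False) for _ in control]

# Because assignment was random, a difference between the group means can be
# attributed to the manipulation rather than to preexisting group differences.
print(round(mean(dv_experimental) - mean(dv_control), 2))  # near 1.0
```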

2.2.6.1. Example of an experiment.    Allison and Messick (1990) led subjects to believe they were the first of six group members to take points from a common resource pool and that they could take as many points as desired which could later be exchanged for cash. Three variables were experimentally manipulated. First, subjects in the low payoff condition were led to believe the pool was only 18 or 21 points in size whereas those in the high payoff condition were told the pool consisted of either 24 or 27 points. Second, the pools were divisible (18 and 24) or nondivisible (21 or 27). Third, half of the subjects were placed in the fate control condition and told that if the requests from the six group members exceeded the pool size, then no one could keep any points, while the other half were in the no fate control condition and told there would be no penalties for overconsumption of the pool.  Finally, data for a fourth variable, social values, was collected via questionnaire four weeks prior to participation. In all, the study employed a 2 (fate control) x 2 (payoff size) x 2 (divisibility) x 2 (social values) between-subjects factorial design.
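A 2 x 2 x 2 x 2 between-subjects design simply crosses every level of every factor, producing 16 cells. The snippet below enumerates the cells of the design just described (the factor labels are taken from the description above; the code itself is only an illustration and the level names are paraphrased).

```python
from itertools import product

# Factors and levels of the design described above (labels paraphrased).
factors = {
    "fate control": ["fate control", "no fate control"],
    "payoff size": ["low payoff", "high payoff"],
    "divisibility": ["divisible", "nondivisible"],
    "social values": ["cooperative", "noncooperative"],
}

cells = list(product(*factors.values()))
print(len(cells))  # 16 between-subjects cells
for cell in cells[:3]:
    print(cell)    # e.g. ('fate control', 'low payoff', 'divisible', 'cooperative')
```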

Results showed that subjects took the least number of points from the resource pool when the resource was divisible, the payoffs were low, and there was no fate control. On the other hand, subjects took the most points when the resource was nondivisible, the payoffs were high, and subjects were noncooperative. To further demonstrate this point, Allison and Messick (1990) counted the number of inducements to which participants were exposed. This number ranged from 0 to 4 inducements. Subjects took between one-fifth and one-fourth when there were one or two inducements, took about one-third when there were three inducements, and about half of the pool when all four were present. They state that an equal division rule was used when there were no temptations to violate equality but as the number of temptations increased, subjects became progressively more likely to overconsume the pool. The authors conclude that the presence of competing cues/factors tends to invite the use of self-serving rules to include “First-come, first-served” and “People who get to go first take more.”

To learn more about the experimental research design, please take a look at our Research Methods in Psychology textbook by visiting:

https://opentext.wsu.edu/carriecuttler/chapter/experiment-basics/

2.2.7. Multi-Method Research

As you have seen above, no single method alone is perfect. All have their strengths and limitations. As such, for the psychologist to provide the clearest picture of what is affecting behavior or mental processes, several of these approaches are typically employed at different stages of the research process. This is called multi-method research.

2.2.8. Archival Research

Another technique used by psychologists is called archival research or when the researcher analyzes data that has already been collected and for another purpose. For instance, a researcher may request data from high schools about a student’s GPA and their SAT and/or ACT score(s) and then obtain their four-year GPA from the university they attended. This can be used to make a prediction about success in college and which measure – GPA or standardized test score – is the better predictor.

2.2.9. Meta-Analysis

Meta-analysis is a statistical procedure that allows a researcher to combine data from more than one study. For example, Shariff et al. (2015) published an article on religious priming and prosociality in Personality and Social Psychology Review . The authors used effect-size analyses, p-curve analyses, and adjustments for publication bias (no worries, you don’t have to understand any of that) to evaluate the robustness of four types of religious priming, how religion affects prosocial behavior, and whether religious-priming effects generalize to those who are loosely or not at all religious. Results were presented across 93 studies and 11,653 participants and showed that religious priming has robust effects on a variety of outcome measures, prosocial behavior included. It did not affect non-religious people, though.

2.2.10. Communicating Results

In scientific research, it is common practice to communicate the findings of our investigation. Reporting what we found in our study allows other researchers to critique our methodology and address its limitations. Publishing allows psychology to grow its knowledge base about human behavior, and it lets us see where gaps still exist. We move our work into the public sphere so others can read and comment on it, and scientists can replicate what we did and possibly extend our work once it is published.

There are several ways to communicate our findings. We can do so at conferences in the form of posters or oral presentations, through newsletters from APA itself or one of its many divisions or other organizations, or through research journals and specifically scientific research articles. Published journal articles represent a form of communication between scientists and in them, the researchers describe how their work relates to previous research, how it replicates and/or extends this work, and what their work might mean theoretically.

Research articles begin with an abstract, a 150-250 word summary of the entire article. Its purpose is to describe the experiment and allow the reader to decide whether to read further. The abstract provides a statement of purpose, an overview of the methods, the main results, and a brief statement of the conclusion. Keywords are also given that allow students and researchers alike to find the article when doing a search.

The abstract is followed by four major sections as described:

  • Introduction – The first section is designed to provide a summary of the current literature as it relates to your topic. It helps the reader to see how you arrived at your hypothesis and the design of your study. Essentially, it gives the logic behind the decisions you made. You also state the purpose and share your predictions or hypothesis.
  • Method – Since replication is a required element of science, we must have a way to share information on our design and sample with readers. This is the essence of the method section and covers three major aspects of your study – your participants, materials or apparatus, and procedure. The reader needs to know who was in your study so that limitations related to generalizability of your findings can be identified and investigated in the future. You will also state your operational definition, describe any groups you used, random sampling or assignment procedures, information about how a scale was scored, etc. Think of the Method section as a cookbook. The participants are your ingredients, the materials or apparatus are whatever tools you will need, and the procedure is the instructions for how to bake the cake.
  • Results – In this section you state the outcomes of your experiment and whether they were statistically significant or not. You can also present tables and figures.
  • Discussion – In this section you start by restating the main findings and hypothesis of the study. Next, you offer an interpretation of the findings and what their significance might be. Finally, you state strengths and limitations of the study which will allow you to propose future directions.

Whether you are writing a research paper for a class, preparing an article for publication, or reading a research article, the structure and function of a research article are the same. Understanding this will help you when reading social psychological articles.

  • Clarify why reliability and validity are important.
  • Define reliability and list and describe forms it takes.
  • Define validity and list and describe forms it takes.

Recall that measurement involves the assignment of scores to an individual, which are used to represent aspects of that individual, such as how conscientious they are or their level of depression. Whether the scores actually represent the individual is what is in question. Cuttler (2017) says in her book Research Methods in Psychology, “Psychologists do not simply assume that their measures work. Instead, they collect data to demonstrate that they work. If their research does not demonstrate that a measure works, they stop using it.” So how do they demonstrate that a measure works? This is where reliability and validity come in.

2.3.1. Reliability

First, reliability describes how consistent a measure is. It can be measured in terms of test-retest reliability, or how reliable the measure is across time; internal consistency, or the “consistency of people’s responses across the items on multiple-item measures” (Cuttler, 2017); and finally inter-rater reliability, or how consistent different observers are when making judgments. In terms of inter-rater reliability, Cuttler (2017) writes, “Inter-rater reliability would also have been measured in Bandura’s Bobo doll study. In this case, the observers’ ratings of how many acts of aggression a particular child committed while playing with the Bobo doll should have been highly positively correlated.”
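To make these three forms of reliability concrete, here is a minimal Python sketch using made-up scores (Bandura’s actual data are not reproduced here). Test-retest and inter-rater reliability are shown as simple correlations, and internal consistency is shown as Cronbach’s alpha, one common index of it.

```python
import numpy as np

# --- Test-retest reliability: the same people measured twice, weeks apart ---
time1 = np.array([12, 18, 9, 22, 15, 7, 19, 14])
time2 = np.array([13, 17, 10, 21, 16, 8, 18, 15])
print("test-retest r:", round(np.corrcoef(time1, time2)[0, 1], 2))

# --- Internal consistency: Cronbach's alpha for a 4-item scale ---
# Rows are respondents, columns are items (invented ratings on a 1-5 scale).
items = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 4],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
])
k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)
total_var = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print("Cronbach's alpha:", round(alpha, 2))

# --- Inter-rater reliability: two observers counting aggressive acts ---
rater_a = np.array([3, 7, 2, 5, 9, 4])
rater_b = np.array([4, 6, 2, 5, 8, 4])
print("inter-rater r:", round(np.corrcoef(rater_a, rater_b)[0, 1], 2))
```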

2.3.2. Validity

A measure is considered to be valid if its scores represent the variable it is said to measure. For instance, if a scale says it measures depression, and it does, then we can say it is valid. Validity can take many forms. First, face validity is “the extent to which a measurement method appears ‘on its face’ to measure the construct of interest” (Cuttler, 2017). A scale purported to measure values should have questions about values such as benevolence, conformity, and self-direction, and not questions about depression or attitudes toward toilet paper.

Content validity is to what degree a measure covers the construct of interest. Cuttler (2017) says, “… consider that attitudes are usually defined as involving thoughts, feelings, and actions toward something. By this conceptual definition, a person has a positive attitude toward exercise to the extent that he or she thinks positive thoughts about exercising, feels good about exercising, and actually exercises.”

Oftentimes, we expect a person’s scores on one measure to be correlated with scores on another measure that we expect it to be related to, called criterion validity. For instance, consider parenting style and attachment. We would expect that if a person indicates on one scale that their father was authoritarian (or dictatorial), then attachment would be low or insecure. In contrast, if the mother was authoritative (or democratic), we would expect the child to show a secure attachment style.

As researchers, we expect that our results will generalize from our sample to the larger population. This was the issue with case studies, as the sample is too small to draw conclusions about everyone. If our results do generalize from the circumstances under which our study was conducted to similar situations, then we can say our study has external validity. External validity is also affected by how real the research is. Two types of realism are possible. First, mundane realism occurs when the research setting closely resembles the real-world setting. Experimental realism is the degree to which the experimental procedures that are used feel real to the participant. It does not matter whether they really mirror real life, only that they appear real to the participant. If so, his or her behavior will be more natural and less artificial.

In contrast, a study is said to have good internal validity when we can confidently say that the effect on the dependent variable (the one that is measured) was due solely to our manipulation or the independent variable. A confound occurs when a factor other than the independent variable leads to changes in the dependent variable.

To learn more about reliability and validity, please visit: https://opentext.wsu.edu/carriecuttler/chapter/reliability-and-validity-of-measurement/

  • Exemplify instances of ethical misconduct in research.
  • List and describe principles of research ethics.

Throughout this module so far, we have seen that it is important for researchers to understand the methods they are using. Equally important, they must understand and appreciate ethical standards in research. The American Psychological Association identifies high standards of ethics and conduct as one of its four main guiding principles or missions. To read about the other three, please visit https://www.apa.org/about/index.aspx . So why are ethical standards needed and what do they look like?

2.4.1. Milgram’s Study on Learning…or Not

Possibly the one social psychologist students know about the most is Stanley Milgram, if not by name, then by his study on obedience using shock (Milgram, 1974). Essentially, two individuals came to each experimental session, but only one of them was a true participant. The other was what is called a confederate, someone who is part of the study without the participant knowing it. The confederate was asked to pick heads or tails and then a coin was flipped. As you might expect, the confederate always won and chose to be the learner. The “experimenter,” who was also a confederate, took him into one room where he was hooked up to wires and electrodes, while the “teacher,” the actual participant, watched, which added to the realism of what was being done. The teacher was then taken into an adjacent room where he was seated in front of a shock generator. The teacher was told it was his task to read a series of word pairs to the learner. Upon completing the list, he would say one word from a pair and it was the learner’s task to state the word it had been paired with. If the learner paired any of the words incorrectly, he would be shocked. The shock generator’s switches started at 15 volts and increased in 15-volt increments up to 450 volts. The switches were labeled with terms such as “Slight shock,” “Moderate shock,” and “Danger: Severe Shock,” and the final two switches were ominously labeled “XXX.”

As the experiment progressed, the teacher would hear the learner scream, holler, plead to be released, complain about a heart condition, or say nothing at all. When the learner stopped replying, the teacher would turn to the experimenter and ask what to do, to which the experimenter indicated that he should treat nonresponses as incorrect and shock the learner. Most participants asked the experimenter whether they should continue at various points in the experiment. The experimenter issued a series of commands, including “Please continue,” “It is absolutely essential that you continue,” and “You have no other choice, you must go on.”

Any guesses as to what happened? What percentage of participants would you hypothesize went all the way to the end of the shock board? Milgram found that 65 percent of participants/teachers shocked the learner all the way to the XXX switches, which would have killed him had the shocks been real. Why? They were told to do so. How do you think the participants felt when they realized that they could have killed someone simply because they were told to do so?

Source: Milgram, S. (1974). Obedience to authority. New York, NY: Harper Perennial.

2.4.2. GO TO JAIL:  Go Directly to Jail. Do Not Pass Go. Do Not Collect $200

Early in the morning on Sunday, August 14, 1971, Palo Alto, CA police began arresting college students for committing armed robbery and burglary. Each suspect was arrested at his home, charged, read his Miranda rights, searched, handcuffed, and placed in the back of a police car as neighbors watched. At the station, the suspect was booked, read his rights again, and identified. He was then placed in a cell. How were these individuals chosen? Of course, they did not really commit the crimes they were charged with. The suspects had answered a newspaper ad requesting volunteers for a study of the psychological effects of prison life.

After screening the individuals who applied to take part in the study, a final group of 24 was selected. These individuals had no psychological problems, criminal record, or history of drug use. They were paid $15 a day for their participation. The participants were divided into two groups through the flip of a coin: one half became the prison guards and the other half the prisoners. The prison was constructed by boarding up each end of a corridor in the basement of Stanford University’s psychology building. This space was called “The Yard” and was the only place where the prisoners were permitted to walk, exercise, and eat. Prison cells were created by removing the doors from some of the labs and replacing them with specially made doors with steel bars and cell numbers. A small closet was used for solitary confinement and was called “The Hole.” There were no clocks or windows in the prison, and an intercom was used to make announcements to all prisoners. The suspects who were arrested were transported to the “Stanford County Jail” to be processed. There they were greeted by the warden and told about the seriousness of their crime. They were strip-searched and deloused, a process made intentionally degrading and humiliating. They were given uniforms with a prison ID number on them, and this number became the only way they were referred to during their time there. A heavy chain was placed on each prisoner’s right ankle to remind them of how oppressive their environment was.

The guards were given no training and could do whatever they felt was necessary to maintain order and command the respect of the prisoners. They made their own set of rules and were supervised by the warden, who was played by another Stanford student. Guards were dressed in identical uniforms, carried a whistle, held a billy club, and wore mirrored sunglasses so no one could see their eyes or read their emotions. Three guards were assigned to each of three eight-hour shifts and supervised the nine prisoners. At 2:30 am they would wake the prisoners to take counts, which provided an opportunity to exert control and to get a feel for their role. Similarly, prisoners had to figure out how they were to act and, at first, tried to maintain their independence. As you might expect, this led to confrontations between the prisoners and the guards, resulting in the guards physically punishing the prisoners with push-ups.

The first day was relatively quiet, but on the second day a rebellion broke out in which prisoners removed their caps, ripped off their numbers, and pushed their beds against their cell doors to create a barricade. The guards responded by obtaining a fire extinguisher and shooting a stream of skin-chilling carbon dioxide at the prisoners. The cells were then broken into, the prisoners stripped, the beds removed, the ringleaders put into solitary confinement, and a program of harassment and intimidation of the remaining inmates begun. Since nine guards could not be on duty at all times to maintain order, a special “privilege cell” was established and the three prisoners least involved in the rebellion were allowed to stay in it. They were given their beds and uniforms back, could brush their teeth and take a bath, and were allowed to eat special food in the presence of the other six prisoners. This broke the solidarity among the prisoners.

Less than 36 hours after the study began, a prisoner began showing signs of uncontrollable crying, acute emotional disturbance, rage, and disorganized thinking. His emotional problems were initially seen as an attempt to gain release, so he was returned to the prison and used as an informant, but the symptoms worsened and he had to be released from the study. Then there was a rumor of a mass escape by the prisoners, which the guards worked to foil. When it was revealed that the prisoners were never actually going to attempt the prison break, the guards became very frustrated and made the prisoners do menial work, push-ups, jumping jacks, and anything else humiliating that they could think of.

A Catholic priest was invited to evaluate how realistic the prison was. Each prisoner was interviewed individually, and most introduced themselves to the priest by their prison number rather than their name. The priest offered to help them obtain a lawyer, and some accepted. One prisoner (#819) was feeling ill and did not meet with the priest right away. When he did, he broke down and began to cry. He was quickly taken to another room and his prison garments were removed. While this occurred, the guards lined up the other prisoners and had them chant, “Prisoner #819 is a bad prisoner. Because of what Prisoner #819 did, my cell is a mess. Mr. Correctional Officer.” This further upset the prisoner, and although he was encouraged to leave, he refused each time. He finally agreed to leave after the researcher (i.e., Zimbardo) told him that what he was undergoing was just a research study and not really prison. The next day, parole hearings were held and prisoners who felt they deserved parole were interviewed one at a time. Most, when asked if they would give up the money they were making for their participation in order to be released, said yes.

In all, the study lasted just six days. Zimbardo noted that three types of guards emerged: those who were tough but fair and followed the prison rules; “good guys” who never punished the prisoners and did them little favors; and finally those who were hostile, inventive in their use of punishment, and truly enjoyed the power they had. As for the prisoners, they coped with the events in the prison in different ways. Some fought back, others broke down emotionally, one developed a rash over his entire body, and some tried to be good prisoners and do all that the guards asked of them. No matter what strategy they used early on, by the end of the study they had all disintegrated, both as a group and as individuals. The guards commanded blind obedience from all of the prisoners.

When asked later why he ended the study, Zimbardo cited two reasons. First, it became apparent that the guards were escalating their abuse of the prisoners in the middle of the night when they thought no one was watching. Second, Christina Maslach, a recent Stanford Ph.D., was asked to conduct interviews with the guards and prisoners and saw the prisoners being marched to the toilet with bags on their heads and their legs chained together. She was outraged and questioned the study’s morality.

Source: http://www.prisonexp.org/

If you would like to learn more about the moral foundations of ethical research, please visit: https://opentext.wsu.edu/carriecuttler/chapter/moral-foundations-of-ethical-research/

2.4.3. Ethical Guidelines

Due to these studies, and others, the American Psychological Association (APA) established guiding principles for conducting psychological research. The principles can be organized according to when they apply: before, during, and after a person’s participation in the study.

2.4.3.1. Before participating. First, researchers must obtain informed consent, meaning the person agrees to participate after being told what will happen to them. They are given information about any risks they face, or potential harm that could come to them, whether physical or psychological. They are also told about confidentiality, or the person’s right not to be identified. Since most research is conducted with students taking introductory psychology courses, they must be offered the option of doing something other than participating in a research study to earn any required course credit. This is called an alternative activity and could take the form of reading and summarizing a research article. The amount of time taken to complete it should not exceed the amount of time the student would be expected to spend participating in a study.

2.4.3.2. While participating. Participants are afforded the right to withdraw, that is, to exit the study if they experience any discomfort.

2.4.3.3. After participating. Once their participation is over, participants should be debriefed, meaning the true purpose of the study is revealed and they are told where to go if they need assistance and how to reach the researcher if they have questions. So can researchers deceive participants, or intentionally withhold the true purpose of the study from them? According to the APA, a minimal amount of deception is allowed.

Human research must be approved by an Institutional Review Board or IRB. It is the IRB that will determine whether the researcher is providing enough information for the participant to give consent that is truly informed, if debriefing is adequate, and if any deception is allowed or not.

If you would like to learn more about how to use ethics in your research, please read: https://opentext.wsu.edu/carriecuttler/chapter/putting-ethics-into-practice/

  • Describe the replication crisis in psychology.
  • Describe the issue with generalizability faced by social psychologists.

2.5.1. The Replication Crisis in Social Psychology

Today, the field of psychology faces what is called a replication crisis: many published findings in psychology cannot be replicated, and replication is one of the hallmarks of science. Swiatkowski and Dompnier (2017) addressed this issue with a focus on social psychology. They note that the field faces a confidence crisis due to events such as Diederik Stapel intentionally fabricating data over a dozen years, which led to the retraction of over 50 published papers. They cite a study by John et al. (2012) in which 56% of 2,155 respondents admitted to collecting more data after discovering that the initial statistical test was not significant, and 46% admitted to selectively reporting only studies that “worked” in a paper to be published. They also note that Nuijten et al. (2015) collected a sample of over 30,000 articles from the top 8 psychology journals and found that 1 in 8 possibly had an inconsistent p value that could have affected the conclusions the researchers drew.

So, how extensive is the issue? The Psychology Reproducibility Project was started to determine to what degree psychological effects from the literature could be replicated. Independent research teams attempted to replicate one hundred published studies drawn from different subfields of psychology. Only 39% of the findings were considered to be successfully replicated. For social psychology the results were worse: only 25% were replicated.

Why might a study not replicate? Swiatkowski and Dompnier (2017) cite a few reasons. First, they point to statistical power, or the probability of correctly rejecting the null hypothesis (H0, the hypothesis stating that there is no effect) when it is actually false. Many studies in social psychology are underpowered, as suggested by the small effect sizes observed in the field; among the significant results that do get published, low power inflates the proportion that are false positives and leads to findings that do not replicate.
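To see why this matters, here is a rough power calculation for a two-group comparison using a normal approximation. The function and the sample sizes are only illustrative; dedicated power-analysis tools (e.g., G*Power or statsmodels) would give slightly more precise answers.

```python
import numpy as np
from scipy.stats import norm

def approx_power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample test via the normal distribution."""
    z_crit = norm.ppf(1 - alpha / 2)
    nc = d * np.sqrt(n_per_group / 2)          # noncentrality of the test statistic
    return norm.cdf(nc - z_crit) + norm.cdf(-nc - z_crit)

# A "small" effect (d = 0.2) studied with 30 people per group -- a common
# scenario -- gives only about a 12% chance of detecting the effect.
print(round(approx_power_two_sample(0.2, 30), 2))    # ~0.12

# Roughly 394 people per group are needed to reach the conventional 80% power.
print(round(approx_power_two_sample(0.2, 394), 2))   # ~0.80
```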

Second, they say that some researchers use “unjustifiable flexibility in data analysis, such as working with several undisclosed dependent variables, collecting more observations after initial hypothesis testing, stopping data collection earlier than planned because of a statistically significant predicted finding, controlling for gender effects a posteriori, dropping experimental conditions, and so on” (pg. 114). Some also run undisclosed multiple tests without making adjustments, called p-hacking, or drop observations to reach a significance level, called cherry picking. Such practices could explain the high prevalence of false positives in social psychological research.
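A small simulation makes it clear how one of these practices, collecting more data and re-testing until the result is significant (optional stopping), inflates the false-positive rate even when there is truly no effect. The sample sizes and stopping rule below are invented purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def false_positive_rate(optional_stopping, n_sims=2000):
    hits = 0
    for _ in range(n_sims):
        # The null hypothesis is true: both groups come from the same population.
        a = list(rng.normal(size=20))
        b = list(rng.normal(size=20))
        p = stats.ttest_ind(a, b).pvalue
        if optional_stopping:
            # "Peek" after each batch of 10 extra participants per group
            # (up to 50 more) and stop as soon as p < .05.
            for _ in range(5):
                if p < 0.05:
                    break
                a += list(rng.normal(size=10))
                b += list(rng.normal(size=10))
                p = stats.ttest_ind(a, b).pvalue
        hits += p < 0.05
    return hits / n_sims

print("fixed sample size: ", false_positive_rate(False))  # close to .05
print("optional stopping: ", false_positive_rate(True))   # noticeably above .05
```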

Third, some current publication standards may promote bad research practices in a few ways. Statistical significance at p < .05 has been treated as the sine qua non condition for publication. According to Swiatkowski and Dompnier (2017), this leads to dichotomous thinking in terms of the “strict existence and non-existence of an effect” (pg. 115). Also, positive, statistically significant results are more likely to be published than negative, statistically non-significant results, which can be hard to interpret. This bias creates a structural incentive to seek out positive results. Finally, the authors point out that current editorial standards show a preference for novelty, accepting studies that report new and original psychological effects. This reduces the importance of replications, which lack prestige and inspire little interest among researchers. It should also be pointed out that there is a “publish or perish” mentality at universities for full-time faculty: those who are prolific and publish often are rewarded with promotions, pay raises, tenure, or prestigious professorships. Also, studies that present highly novel and cool findings are showcased by the media.

The authors state, “In the long run, the lack of a viable falsification procedure seriously undermines the quality of scientific knowledge psychology produces. Without a way to build a cumulative net of well-tested theories and to abandon those that are false, social psychology risks ending up with a confused mixture of both instead” (pg. 117).

For more on this issue, check out the following articles:

  • 2016 Article in The Atlantic – https://www.theatlantic.com/science/archive/2016/03/psychologys-replication-crisis-cant-be-wished-away/472272/
  • 2018 Article in The Atlantic – https://www.theatlantic.com/science/archive/2018/11/psychologys-replication-crisis-real/576223/
  • 2018 Article in the Washington Post – https://www.washingtonpost.com/news/speaking-of-science/wp/2018/08/27/researchers-replicate-just-13-of-21-social-science-experiments-published-in-top-journals/?noredirect=on&utm_term=.2a05aff2d7de
  • 2018 Article from Science News – https://www.sciencenews.org/blog/science-public/replication-crisis-psychology-science-studies-statistics

2.5.2. Generalizability

Earlier we discussed how researchers want to generalize their findings from the sample to the population, or from a small, representative group to everyone. The problem that plagues social psychology is who makes up our samples. Many social psychological studies are conducted with college students working for course credit (Sears, 1986). They represent what is called a convenience sample. Can we generalize from college students to the larger group?

Module Recap

In Module 1 we stated that psychology studies behavior and mental processes using the strict standards of science. In Module 2 we showed you how that is done through adoption of the scientific method and use of the research designs of observation, case study, surveys, correlation, and experiments. To make sure our measurement of a variable is sound, we need measures that are reliable and valid. And to give our research legitimacy, we have to follow clear ethical standards, which include gaining informed consent from participants, telling them of the risks, giving them the right to withdraw, debriefing them, and using nothing more than minimal deception. Despite all this, psychology faces a crisis in which many studies do not replicate and findings from some social psychological research do not generalize to the population.

This concludes Part I of the book. In Part II we will discuss how we think about ourselves and others. First, we will tackle the self and then move to the perception of others. Part II will conclude with a discussion of attitudes.

Organizing Your Social Sciences Research Paper

Quantitative Methods

Quantitative methods emphasize objective measurements and the statistical, mathematical, or numerical analysis of data collected through polls, questionnaires, and surveys, or by manipulating pre-existing statistical data using computational techniques. Quantitative research focuses on gathering numerical data and generalizing it across groups of people or using it to explain a particular phenomenon.

Babbie, Earl R. The Practice of Social Research . 12th ed. Belmont, CA: Wadsworth Cengage, 2010; Muijs, Daniel. Doing Quantitative Research in Education with SPSS . 2nd edition. London: SAGE Publications, 2010.

Need Help Locating Statistics?

Resources for locating data and statistics can be found here:

Statistics & Data Research Guide

Characteristics of Quantitative Research

Your goal in conducting a quantitative research study is to determine the relationship between one thing [an independent variable] and another [a dependent or outcome variable] within a population. Quantitative research designs are either descriptive [subjects usually measured once] or experimental [subjects measured before and after a treatment]. A descriptive study establishes only associations between variables; an experimental study establishes causality.

Quantitative research deals in numbers, logic, and an objective stance. Quantitative research focuses on numeric and unchanging data and detailed, convergent reasoning rather than divergent reasoning [i.e., the generation of a variety of ideas about a research problem in a spontaneous, free-flowing manner].

Its main characteristics are:

  • The data is usually gathered using structured research instruments.
  • The results are based on larger sample sizes that are representative of the population.
  • The research study can usually be replicated or repeated, given its high reliability.
  • Researcher has a clearly defined research question to which objective answers are sought.
  • All aspects of the study are carefully designed before data is collected.
  • Data are in the form of numbers and statistics, often arranged in tables, charts, figures, or other non-textual forms.
  • Project can be used to generalize concepts more widely, predict future results, or investigate causal relationships.
  • Researcher uses tools, such as questionnaires or computer software, to collect numerical data.

The overarching aim of a quantitative research study is to classify features, count them, and construct statistical models in an attempt to explain what is observed.

Things to keep in mind when reporting the results of a study using quantitative methods:

  • Explain the data collected and their statistical treatment as well as all relevant results in relation to the research problem you are investigating. Interpretation of results is not appropriate in this section.
  • Report unanticipated events that occurred during your data collection. Explain how the actual analysis differs from the planned analysis. Explain your handling of missing data and why any missing data does not undermine the validity of your analysis.
  • Explain the techniques you used to "clean" your data set.
  • Choose a minimally sufficient statistical procedure; provide a rationale for its use and a reference for it. Specify any computer programs used.
  • Describe the assumptions for each procedure and the steps you took to ensure that they were not violated.
  • When using inferential statistics, provide the descriptive statistics, confidence intervals, and sample sizes for each variable as well as the value of the test statistic, its direction, the degrees of freedom, and the significance level [report the actual p value]; a minimal sketch of these quantities appears after this list.
  • Avoid inferring causality, particularly in nonrandomized designs or without further experimentation.
  • Use tables to provide exact values; use figures to convey global effects. Keep figures small in size; include graphic representations of confidence intervals whenever possible.
  • Always tell the reader what to look for in tables and figures.
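As an illustration of the quantities listed above, here is a minimal Python sketch for a simple two-group comparison. The scores are invented; the point is only to show the descriptive statistics, sample sizes, test statistic, degrees of freedom, exact p value, and confidence interval that a write-up should report.

```python
import numpy as np
from scipy import stats

# Invented scores for two groups; a real report would use the study's data.
group_a = np.array([12, 15, 14, 10, 13, 16, 11, 14])
group_b = np.array([ 9, 11, 10, 12,  8, 10, 11,  9])

# Descriptive statistics and sample sizes for each variable.
for name, g in [("A", group_a), ("B", group_b)]:
    print(f"Group {name}: n = {g.size}, M = {g.mean():.2f}, SD = {g.std(ddof=1):.2f}")

# Independent-samples t-test: test statistic, degrees of freedom, exact p value.
t, p = stats.ttest_ind(group_a, group_b)
df = group_a.size + group_b.size - 2
print(f"t({df}) = {t:.2f}, p = {p:.3f}")

# 95% confidence interval for the mean difference (pooled standard error).
diff = group_a.mean() - group_b.mean()
sp2 = ((group_a.size - 1) * group_a.var(ddof=1)
       + (group_b.size - 1) * group_b.var(ddof=1)) / df
se = np.sqrt(sp2 * (1 / group_a.size + 1 / group_b.size))
lo, hi = stats.t.interval(0.95, df, loc=diff, scale=se)
print(f"mean difference = {diff:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```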

NOTE: When using pre-existing statistical data gathered and made available by anyone other than yourself [e.g., a government agency], you still must report on the methods that were used to gather the data, describe any missing data, and, if there is any, provide a clear explanation of why the missing data does not undermine the validity of your final analysis.

Babbie, Earl R. The Practice of Social Research . 12th ed. Belmont, CA: Wadsworth Cengage, 2010; Brians, Craig Leonard et al. Empirical Political Analysis: Quantitative and Qualitative Research Methods . 8th ed. Boston, MA: Longman, 2011; McNabb, David E. Research Methods in Public Administration and Nonprofit Management: Quantitative and Qualitative Approaches . 2nd ed. Armonk, NY: M.E. Sharpe, 2008; Quantitative Research Methods. Writing@CSU. Colorado State University; Singh, Kultar. Quantitative Social Research Methods . Los Angeles, CA: Sage, 2007.

Basic Research Design for Quantitative Studies

Before designing a quantitative research study, you must decide whether it will be descriptive or experimental because this will dictate how you gather, analyze, and interpret the results. A descriptive study is governed by the following rules: subjects are generally measured once; the intention is to only establish associations between variables; and the study may include a sample population of hundreds or thousands of subjects to ensure that a valid estimate of a generalized relationship between variables has been obtained. An experimental design includes subjects measured before and after a particular treatment, the sample population may be very small and purposefully chosen, and it is intended to establish causality between variables.

Introduction

The introduction to a quantitative study is usually written in the present tense and from the third person point of view. It covers the following information:

  • Identifies the research problem -- as with any academic study, you must state clearly and concisely the research problem being investigated.
  • Reviews the literature -- review scholarship on the topic, synthesizing key themes and, if necessary, noting studies that have used similar methods of inquiry and analysis. Note where key gaps exist and how your study helps to fill these gaps or clarifies existing knowledge.
  • Describes the theoretical framework -- provide an outline of the theory or hypothesis underpinning your study. If necessary, define unfamiliar or complex terms, concepts, or ideas and provide the appropriate background information to place the research problem in proper context [e.g., historical, cultural, economic, etc.].

Methodology

The methods section of a quantitative study should describe how each objective of your study will be achieved. Be sure to provide enough detail to enable the reader to make an informed assessment of the methods being used to obtain results associated with the research problem. The methods section should be presented in the past tense.

  • Study population and sampling -- where did the data come from; how robust is it; note where gaps exist or what was excluded. Note the procedures used for their selection;
  • Data collection – describe the tools and methods used to collect information and identify the variables being measured; describe the methods used to obtain the data; and, note if the data was pre-existing [i.e., government data] or you gathered it yourself. If you gathered it yourself, describe what type of instrument you used and why. Note that no data set is perfect--describe any limitations in methods of gathering data.
  • Data analysis -- describe the procedures for processing and analyzing the data. If appropriate, describe the specific instruments of analysis used to study each research objective, including mathematical techniques and the type of computer software used to manipulate the data.

Results

The findings of your study should be written objectively and in a succinct and precise format. In quantitative studies, it is common to use graphs, tables, charts, and other non-textual elements to help the reader understand the data. Make sure that non-textual elements do not stand in isolation from the text but are used to supplement the overall description of the results and to help clarify key points being made. Further information about how to effectively present data using charts and graphs can be found elsewhere in this guide.

  • Statistical analysis -- how did you analyze the data? What were the key findings from the data? The findings should be presented in a logical, sequential order. Describe but do not interpret these trends or negative results; save that for the discussion section. The results should be presented in the past tense.

Discussion

Discussions should be analytic, logical, and comprehensive. The discussion should meld together your findings in relation to those identified in the literature review and place them within the context of the theoretical framework underpinning the study. The discussion should be presented in the present tense.

  • Interpretation of results -- reiterate the research problem being investigated and compare and contrast the findings with the research questions underlying the study. Did they affirm predicted outcomes or did the data refute them?
  • Description of trends, comparison of groups, or relationships among variables -- describe any trends that emerged from your analysis and explain all unanticipated and statistically non-significant findings.
  • Discussion of implications – what is the meaning of your results? Highlight key findings based on the overall results and note findings that you believe are important. How have the results helped fill gaps in understanding the research problem?
  • Limitations -- describe any limitations or unavoidable bias in your study and, if necessary, note why these limitations did not inhibit effective interpretation of the results.

Conclusion

End your study by summarizing the topic and providing a final comment and assessment of the study.

  • Summary of findings – synthesize the answers to your research questions. Do not report any statistical data here; just provide a narrative summary of the key findings and describe what was learned that you did not know before conducting the study.
  • Recommendations – if appropriate to the aim of the assignment, tie key findings with policy recommendations or actions to be taken in practice.
  • Future research – note the need for future research linked to your study’s limitations or to any remaining gaps in the literature that were not addressed in your study.

Black, Thomas R. Doing Quantitative Research in the Social Sciences: An Integrated Approach to Research Design, Measurement and Statistics . London: Sage, 1999; Gay, L. R. and Peter Airasian. Educational Research: Competencies for Analysis and Applications . 7th edition. Upper Saddle River, NJ: Merrill Prentice Hall, 2003; Hector, Anestine. An Overview of Quantitative Research in Composition and TESOL . Department of English, Indiana University of Pennsylvania; Hopkins, Will G. “Quantitative Research Design.” Sportscience 4, 1 (2000); "A Strategy for Writing Up Research Results. The Structure, Format, Content, and Style of a Journal-Style Scientific Paper." Department of Biology. Bates College; Nenty, H. Johnson. "Writing a Quantitative Research Thesis." International Journal of Educational Science 1 (2009): 19-32; Ouyang, Ronghua (John). Basic Inquiry of Quantitative Research . Kennesaw State University.

Strengths of Using Quantitative Methods

Quantitative researchers try to recognize and isolate specific variables contained within the study framework, seek correlations, relationships, and causality, and attempt to control the environment in which the data is collected to avoid the risk of variables other than the one being studied accounting for the relationships identified.

Among the specific strengths of using quantitative methods to study social science research problems:

  • Allows for a broader study, involving a greater number of subjects, and enhancing the generalization of the results;
  • Allows for greater objectivity and accuracy of results. Generally, quantitative methods are designed to provide summaries of data that support generalizations about the phenomenon under study. In order to accomplish this, quantitative research usually involves few variables and many cases, and employs prescribed procedures to ensure validity and reliability;
  • Applying well established standards means that the research can be replicated, and then analyzed and compared with similar studies;
  • You can summarize vast sources of information and make comparisons across categories and over time; and,
  • Personal bias can be avoided by keeping a 'distance' from participating subjects and using accepted computational techniques .

Babbie, Earl R. The Practice of Social Research . 12th ed. Belmont, CA: Wadsworth Cengage, 2010; Brians, Craig Leonard et al. Empirical Political Analysis: Quantitative and Qualitative Research Methods . 8th ed. Boston, MA: Longman, 2011; McNabb, David E. Research Methods in Public Administration and Nonprofit Management: Quantitative and Qualitative Approaches . 2nd ed. Armonk, NY: M.E. Sharpe, 2008; Singh, Kultar. Quantitative Social Research Methods . Los Angeles, CA: Sage, 2007.

Limitations of Using Quantitative Methods

Quantitative methods presume an objective approach to studying research problems, where data is controlled and measured, to address the accumulation of facts and to determine the causes of behavior. As a consequence, the results of quantitative research may be statistically significant but are often humanly insignificant.

Some specific limitations associated with using quantitative methods to study research problems in the social sciences include:

  • Quantitative data is more efficient and able to test hypotheses, but may miss contextual detail;
  • Uses a static and rigid approach and so employs an inflexible process of discovery;
  • The development of standard questions by researchers can lead to "structural bias" and false representation, where the data actually reflects the view of the researcher instead of the participating subject;
  • Results provide less detail on behavior, attitudes, and motivation;
  • Researcher may collect a much narrower and sometimes superficial dataset;
  • Results are limited as they provide numerical descriptions rather than detailed narrative and generally provide less elaborate accounts of human perception;
  • The research is often carried out in an unnatural, artificial environment so that a level of control can be applied to the exercise. This level of control might not normally be in place in the real world thus yielding "laboratory results" as opposed to "real world results"; and,
  • Preset answers will not necessarily reflect how people really feel about a subject and, in some cases, might just be the closest match to the preconceived hypothesis.

Research Tip

Finding Examples of How to Apply Different Types of Research Methods

SAGE publications is a major publisher of studies about how to design and conduct research in the social and behavioral sciences. Their SAGE Research Methods Online and Cases database includes contents from books, articles, encyclopedias, handbooks, and videos covering social science research design and methods including the complete Little Green Book Series of Quantitative Applications in the Social Sciences and the Little Blue Book Series of Qualitative Research techniques. The database also includes case studies outlining the research methods used in real research projects. This is an excellent source for finding definitions of key terms and descriptions of research design and practice, techniques of data gathering, analysis, and reporting, and information about theories of research [e.g., grounded theory]. The database covers both qualitative and quantitative research methods as well as mixed methods approaches to conducting research.

SAGE Research Methods Online and Cases

Source: https://libguides.usc.edu/writingguide


13. Experimental design

Chapter outline.

  • What is an experiment and when should you use one? (8 minute read)
  • True experimental designs (7 minute read)
  • Quasi-experimental designs (8 minute read)
  • Non-experimental designs (5 minute read)
  • Critical and ethical considerations (5 minute read)

Content warning : examples in this chapter contain references to non-consensual research in Western history, including experiments conducted during the Holocaust and on African Americans (section 13.6).

13.1 What is an experiment and when should you use one?

Learning objectives.

Learners will be able to…

  • Identify the characteristics of a basic experiment
  • Describe causality in experimental design
  • Discuss the relationship between dependent and independent variables in experiments
  • Explain the links between experiments and generalizability of results
  • Describe advantages and disadvantages of experimental designs

The basics of experiments

The first experiment I can remember conducting was for my fourth grade science fair. I wondered whether latex- or oil-based paint would hold up to sunlight better. So, I went to the hardware store and got a few small cans of paint and two sets of wooden paint sticks. I painted one set with oil-based paint and the other with latex-based paint, in different colors, and put them in a sunny spot in the back yard. My hypothesis was that the oil-based paint would fade the most and that more fading would happen the longer I left the paint sticks out. (I know, it’s obvious, but I was only 10.)

I checked in on the paint sticks every few days for a month and wrote down my observations. The first part of my hypothesis ended up being wrong—it was actually the latex-based paint that faded the most. But the second part was right, and the paint faded more and more over time. This is a simple example, of course—experiments get a heck of a lot more complex than this when we’re talking about real research.

Merriam-Webster defines an experiment   as “an operation or procedure carried out under controlled conditions in order to discover an unknown effect or law, to test or establish a hypothesis, or to illustrate a known law.” Each of these three components of the definition will come in handy as we go through the different types of experimental design in this chapter. Most of us probably think of the physical sciences when we think of experiments, and for good reason—these experiments can be pretty flashy! But social science and psychological research follow the same scientific methods, as we’ve discussed in this book.

Experiments can be used in the social sciences just as they can in the physical sciences. It makes sense to use an experiment when you want to determine the cause of a phenomenon with as much accuracy as possible. Some types of experimental designs do this more precisely than others, as we’ll see throughout the chapter. If you’ll remember back to Chapter 11 and the discussion of validity, experiments are the best way to ensure internal validity, or the extent to which a change in your independent variable causes a change in your dependent variable.

Experimental designs for research projects are most appropriate when trying to uncover or test a hypothesis about the cause of a phenomenon, so they are best for explanatory research questions. As we’ll learn throughout this chapter, different circumstances are appropriate for different types of experimental designs. Each type of experimental design has advantages and disadvantages, and some are better at controlling the effect of extraneous variables —those variables and characteristics that have an effect on your dependent variable, but aren’t the primary variable whose influence you’re interested in testing. For example, in a study that tries to determine whether aspirin lowers a person’s risk of a fatal heart attack, a person’s race would likely be an extraneous variable because you primarily want to know the effect of aspirin.

In practice, many types of experimental designs can be logistically challenging and resource-intensive. As practitioners, the likelihood that we will be involved in some of the types of experimental designs discussed in this chapter is fairly low. However, it’s important to learn about these methods, even if we might not ever use them, so that we can be thoughtful consumers of research that uses experimental designs.

While we might not use all of these types of experimental designs, many of us will engage in evidence-based practice during our time as social workers. A lot of research developing evidence-based practice, which has a strong emphasis on generalizability, will use experimental designs. You’ve undoubtedly seen one or two in your literature search so far.

The logic of experimental design

How do we know that one phenomenon causes another? The complexity of the social world in which we practice and conduct research means that causes of social problems are rarely cut and dry. Uncovering explanations for social problems is key to helping clients address them, and experimental research designs are one road to finding answers.

As you read about in Chapter 8 (and as we’ll discuss again in Chapter 15 ), just because two phenomena are related in some way doesn’t mean that one causes the other. Ice cream sales increase in the summer, and so does the rate of violent crime; does that mean that eating ice cream is going to make me murder someone? Obviously not, because ice cream is great. The reality of that relationship is far more complex—it could be that hot weather makes people more irritable and, at times, violent, while also making people want ice cream. More likely, though, there are other social factors not accounted for in the way we just described this relationship.

Experimental designs can help clear up at least some of this fog by allowing researchers to isolate the effect of interventions on dependent variables by controlling extraneous variables. In true experimental design (discussed in the next section) and some quasi-experimental designs, researchers accomplish this with the control group and the experimental group. (The experimental group is sometimes called the “treatment group,” but we will call it the experimental group in this chapter.) The control group does not receive the intervention you are testing (they may receive no intervention or what is known as “treatment as usual”), while the experimental group does. (You will hopefully remember our earlier discussion of control variables in Chapter 8; conceptually, the use of the word “control” here is the same.)


In a well-designed experiment, your control group should look almost identical to your experimental group in terms of demographics and other relevant factors. What if we want to know the effect of CBT on social anxiety, but we have learned in prior research that men tend to have a more difficult time overcoming social anxiety? We would want our control and experimental groups to have a similar gender mix because it would limit the effect of gender on our results, since ostensibly, both groups’ results would be affected by gender in the same way. If your control group has 5 women, 6 men, and 4 non-binary people, then your experimental group should be made up of roughly the same gender balance to help control for the influence of gender on the outcome of your intervention. (In reality, the groups should be similar along other dimensions, as well, and your group will likely be much larger.) The researcher will use the same outcome measures for both groups and compare them, and assuming the experiment was designed correctly, get a pretty good answer about whether the intervention had an effect on social anxiety.

You will also hear people talk about comparison groups , which are similar to control groups. The primary difference between the two is that a control group is populated using random assignment, but a comparison group is not. Random assignment entails using a random process to decide which participants are put into the control or experimental group (which participants receive an intervention and which do not). By randomly assigning participants to a group, you can reduce the effect of extraneous variables on your research because there won’t be a systematic difference between the groups.

Do not confuse random assignment with random sampling. Random sampling is a method for selecting a sample from a population, and is rarely used in psychological research. Random assignment is a method for assigning participants in a sample to the different conditions, and it is an important element of all experimental research in psychology and other related fields. Random sampling also helps a great deal with generalizability, whereas random assignment increases internal validity.
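The distinction is easy to see in code. The sketch below randomly assigns an already-recruited (and likely non-random) sample of participants to the two conditions; the participant labels are hypothetical.

```python
import random

# Hypothetical pool of participants who have already consented to take part.
participants = [f"P{i:02d}" for i in range(1, 21)]

# Random assignment: shuffle the sample, then split it, so every person has
# an equal chance of ending up in either condition.
random.seed(42)  # fixed seed only so the example is repeatable
random.shuffle(participants)
half = len(participants) // 2
experimental_group = participants[:half]
control_group = participants[half:]

print("Experimental:", experimental_group)
print("Control:     ", control_group)
```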

We have already learned about internal validity in Chapter 11. The use of an experimental design will bolster internal validity since it works to isolate causal relationships. As we will see in the coming sections, some types of experimental design do this more effectively than others. It’s also worth considering that true experiments, which most effectively show causality, are often difficult and expensive to implement. Although other experimental designs aren’t perfect, they still produce useful, valid evidence and may be more feasible to carry out.

Key Takeaways

  • Experimental designs are useful for establishing causality, but some types of experimental design do this better than others.
  • Experiments help researchers isolate the effect of the independent variable on the dependent variable by controlling for the effect of extraneous variables .
  • Experiments use a control/comparison group and an experimental group to test the effects of interventions. These groups should be as similar to each other as possible in terms of demographics and other relevant factors.
  • True experiments have control groups with randomly assigned participants, while other types of experiments have comparison groups to which participants are not randomly assigned.
  • Think about the research project you’ve been designing so far. How might you use a basic experiment to answer your question? If your question isn’t explanatory, try to formulate a new explanatory question and consider the usefulness of an experiment.
  • Why is establishing a simple relationship between two variables not indicative of one causing the other?

13.2 True experimental design

  • Describe a true experimental design in social work research
  • Understand the different types of true experimental designs
  • Determine what kinds of research questions true experimental designs are suited for
  • Discuss advantages and disadvantages of true experimental designs

True experimental design, often considered the “gold standard” in research designs, is one of the most rigorous of all research designs. In this design, one or more independent variables are manipulated by the researcher (as treatments), subjects are randomly assigned to different treatment levels (random assignment), and the results of the treatments on outcomes (dependent variables) are observed. The unique strength of experimental research is its internal validity and its ability to establish causality through treatment manipulation, while controlling for the effects of extraneous variables. Sometimes the treatment level is no treatment, while other times it is simply a different treatment than the one we are trying to evaluate. For example, we might have a control group made up of people who will not receive any treatment for a particular condition. Or, a control group could consist of people who consent to treatment with DBT when we are testing the effectiveness of CBT.

As we discussed in the previous section, a true experiment has a control group with participants randomly assigned, and an experimental group. This is the most basic element of a true experiment. The next decision a researcher must make is when they need to gather data during their experiment. Do they take a baseline measurement and then a measurement after treatment, or just a measurement after treatment, or do they handle measurement another way? Below, we’ll discuss the three main types of true experimental designs. There are sub-types of each of these designs, but here, we just want to get you started with some of the basics.

Using a true experiment in social work research is often quite difficult, since, as mentioned earlier, true experiments can be resource intensive. True experiments work best with relatively large sample sizes, and random assignment, a key criterion of true experimental design, can be hard, and sometimes unethical, to execute in practice when you have people in dire need of an intervention. Nonetheless, some of the strongest evidence bases are built on true experiments.

For the purposes of this section, let’s bring back the example of CBT for the treatment of social anxiety. We have a group of 500 individuals who have agreed to participate in our study, and we have randomly assigned them to the control and experimental groups. The folks in the experimental group will receive CBT, while the folks in the control group will receive more unstructured, basic talk therapy. These designs, as we talked about above, are best suited for explanatory research questions.

Before we get started, take a look at the table below. When explaining experimental research designs, we often use diagrams with abbreviations to visually represent the experiment. Table 13.1 starts us off by laying out what each of the abbreviations means.

[Table 13.1: Experimental design notation. R = random assignment to a group (RA = group A, RB = group B); O = observation or measurement (O1 = pretest, O2 = post-test); X = intervention (Xe = experimental intervention, Xi = treatment as usual).]

Pretest and post-test control group design

In pretest and post-test control group design, participants are given a pretest of some kind to measure their baseline state before their participation in an intervention. In our social anxiety experiment, we would have participants in both the experimental and control groups complete some measure of social anxiety—most likely an established scale and/or a structured interview—before they start their treatment. As part of the experiment, we would have a defined time period during which the treatment would take place (let’s say 12 weeks, just for illustration). At the end of 12 weeks, we would give both groups the same measure as a post-test.

[Figure: Pretest and post-test control group design (RA: O1 Xe O2; RB: O1 O2)]

In the diagram, RA (random assignment group A) is the experimental group and RB is the control group. O1 denotes the pretest, Xe denotes the experimental intervention, and O2 denotes the post-test. Let’s look at this diagram another way, using the example of CBT for social anxiety that we’ve been talking about.

[Figure: Pretest and post-test control group design applied to the CBT for social anxiety example]

In a situation where the control group received treatment as usual instead of no intervention, the diagram would look this way, with Xi denoting treatment as usual (Figure 13.3).

[Figure 13.3: Pretest and post-test control group design with treatment as usual for the control group (RA: O1 Xe O2; RB: O1 Xi O2)]

Hopefully, these diagrams provide you with a visualization of how this type of experiment establishes time order, a key component of a causal relationship. Did the change occur after the intervention? Assuming there is a change in the scores between the pretest and post-test, we would be able to say that yes, the change did occur after the intervention. Causality can’t exist if the change happened before the intervention—this would mean that something else led to the change, not our intervention.
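To make the notation concrete, here is a minimal sketch in Python of the design just described. It is not part of the original chapter: the participant IDs, the 0-100 anxiety scale, and the assumed treatment effects are all invented for illustration, but the structure mirrors the diagram (random assignment, O1, the intervention, O2).

```python
import random
import statistics

random.seed(42)

# Hypothetical participant pool; IDs, scores, and effects are invented for illustration.
participants = [f"participant_{i}" for i in range(500)]
random.shuffle(participants)

# R: random assignment to the experimental (CBT) and control (talk therapy) groups.
experimental = set(participants[:250])
control = set(participants[250:])

# O1: simulated baseline social anxiety scores (0-100, higher = more anxious).
o1 = {p: random.gauss(70, 10) for p in participants}

# Xe / Xi: we assume CBT reduces scores by about 15 points and talk therapy by about 5.
def posttest(p):
    effect = 15 if p in experimental else 5
    return o1[p] - random.gauss(effect, 5)

# O2: post-test scores after the 12-week treatment period.
o2 = {p: posttest(p) for p in participants}

def mean_improvement(group):
    return statistics.mean(o1[p] - o2[p] for p in group)

print(f"Mean improvement, experimental group (Xe): {mean_improvement(experimental):.1f}")
print(f"Mean improvement, control group (Xi):      {mean_improvement(control):.1f}")
```

Because participants are randomly assigned, the difference between the two groups' average improvement is what the design attributes to the experimental intervention.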

Post-test only control group design

Post-test only control group design involves only giving participants a post-test, just like it sounds (Figure 13.4).

[Figure 13.4: Post-test only control group design]

But why would you use this design instead of a pretest/post-test design? One reason could be the testing effect that can happen when research participants take a pretest. In research, the testing effect refers to “measurement error related to how a test is given; the conditions of the testing, including environmental conditions; and acclimation to the test itself” (Engel & Schutt, 2017, p. 444). [1] (When we say “measurement error,” all we mean is the accuracy of the way we measure the dependent variable.) Figure 13.4 is a visualization of this type of experiment. The testing effect isn’t always bad in practice—our initial assessments might help clients identify or put into words feelings or experiences they are having when they haven’t been able to do that before. In research, however, we might want to control its effects to isolate a cleaner causal relationship between intervention and outcome.

Going back to our CBT for social anxiety example, we might be concerned that participants would learn about social anxiety symptoms by virtue of taking a pretest. They might then identify that they have those symptoms on the post-test, even though they are not new symptoms for them. That could make our intervention look less effective than it actually is.

However, without a baseline measurement, establishing causality can be more difficult. If we don’t know someone’s state of mind before our intervention, how do we know our intervention did anything at all? Establishing time order is thus a little more difficult. You must balance this consideration with the benefits of this type of design.

Solomon four group design

One way we can possibly measure how much the testing effect might change the results of the experiment is with the Solomon four group design. Basically, as part of this experiment, you have two control groups and two experimental groups. The first pair of groups receives both a pretest and a post-test. The other pair of groups receives only a post-test (Figure 13.5). This design helps address the problem of establishing time order in post-test only control group designs.

[Figure 13.5: Solomon four group design]

For our CBT project, we would randomly assign people to four different groups instead of just two. Groups A and B would take our pretest measures and our post-test measures, and groups C and D would take only our post-test measures. We could then compare the results among these groups and see if they’re significantly different between the folks in A and B, and C and D. If they are, we may have identified some kind of testing effect, which enables us to put our results into full context. We don’t want to draw a strong causal conclusion about our intervention when we have major concerns about testing effects without trying to determine the extent of those effects.
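Here is a minimal sketch of how the four groups might be compared, assuming invented effect sizes: if the pretested groups (A and B) score systematically differently from the non-pretested groups (C and D), that gap is a rough estimate of the testing effect. None of the numbers come from the chapter; they only illustrate the comparison.

```python
import random
import statistics

random.seed(0)

# Assumed, illustrative effects: treatment lowers anxiety scores, and
# taking the pretest nudges post-test scores upward (a testing effect).
TREATMENT_EFFECT = -15
CONTROL_EFFECT = -5
TESTING_EFFECT = 4

def posttest_mean(treated, pretested, n=125):
    """Simulate one group's mean post-test score."""
    scores = []
    for _ in range(n):
        score = random.gauss(70, 10)                       # latent baseline
        score += TREATMENT_EFFECT if treated else CONTROL_EFFECT
        score += TESTING_EFFECT if pretested else 0
        scores.append(score)
    return statistics.mean(scores)

groups = {
    "A (pretest + treatment)": posttest_mean(treated=True,  pretested=True),
    "B (pretest + control)":   posttest_mean(treated=False, pretested=True),
    "C (treatment only)":      posttest_mean(treated=True,  pretested=False),
    "D (control only)":        posttest_mean(treated=False, pretested=False),
}
for name, mean in groups.items():
    print(f"{name}: mean post-test = {mean:.1f}")

# Rough estimate of the testing effect: pretested groups vs. non-pretested groups.
pretested = (groups["A (pretest + treatment)"] + groups["B (pretest + control)"]) / 2
unpretested = (groups["C (treatment only)"] + groups["D (control only)"]) / 2
print(f"Estimated testing effect: {pretested - unpretested:.1f}")
```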

Solomon four group designs are less common in social work research, primarily because of the logistics and resource needs involved. Nonetheless, this is an important experimental design to consider when we want to address major concerns about testing effects.

  • True experimental design is best suited for explanatory research questions.
  • True experiments require random assignment of participants to control and experimental groups.
  • Pretest/post-test research design involves two points of measurement—one pre-intervention and one post-intervention.
  • Post-test only research design involves only one point of measurement—post-intervention. It is a useful design to minimize the effect of testing effects on our results.
  • Solomon four group research design involves both of the above types of designs, using 2 pairs of control and experimental groups. One group receives both a pretest and a post-test, while the other receives only a post-test. This can help uncover the influence of testing effects.
  • Think about a true experiment you might conduct for your research project. Which design would be best for your research, and why?
  • What challenges or limitations might make it unrealistic (or at least very complicated!) for you to carry out your true experimental design in the real world as a student researcher?
  • What hypothesis(es) would you test using this true experiment?

13.4 Quasi-experimental designs

  • Describe a quasi-experimental design in social work research
  • Understand the different types of quasi-experimental designs
  • Determine what kinds of research questions quasi-experimental designs are suited for
  • Discuss advantages and disadvantages of quasi-experimental designs

Quasi-experimental designs are a lot more common in social work research than true experimental designs. Although quasi-experiments don’t do as good a job of giving us robust proof of causality, they still allow us to establish time order, which is a key element of causality. The prefix quasi means “resembling,” so quasi-experimental research is research that resembles experimental research, but is not true experimental research. Nonetheless, given proper research design, quasi-experiments can still provide extremely rigorous and useful results.

There are a few key differences between true experimental and quasi-experimental research. The primary difference is that quasi-experimental research does not involve random assignment to control and experimental groups; instead, we talk about comparison groups in quasi-experimental research. As a result, these types of experiments don’t control for the effect of extraneous variables as well as a true experiment.

Quasi-experiments are most likely to be conducted in field settings in which random assignment is difficult or impossible. They are often conducted to evaluate the effectiveness of a treatment—perhaps a type of psychotherapy or an educational intervention. We’re able to eliminate some threats to internal validity, but we can’t do this as effectively as we can with a true experiment. Realistically, our CBT-social anxiety project is likely to be a quasi-experiment, based on the resources and participant pool we’re likely to have available.

It’s important to note that not all quasi-experimental designs have a comparison group.  There are many different kinds of quasi-experiments, but we will discuss the three main types below: nonequivalent comparison group designs, time series designs, and ex post facto comparison group designs.

Nonequivalent comparison group design

You will notice that this type of design looks extremely similar to the pretest/post-test design that we discussed in section 13.3. But instead of random assignment to control and experimental groups, researchers use other methods to construct their comparison and experimental groups. A diagram of this design will also look very similar to pretest/post-test design, but you’ll notice we’ve removed the “R” from our groups, since they are not randomly assigned (Figure 13.6).

[Figure 13.6: Nonequivalent comparison group design]

Researchers using this design select a comparison group that’s as close as possible based on relevant factors to their experimental group. Engel and Schutt (2017) [2] identify two different selection methods:

  • Individual matching : Researchers take the time to match individual cases in the experimental group to similar cases in the comparison group. It can be difficult, however, to match participants on all the variables you want to control for.
  • Aggregate matching : Instead of trying to match individual participants to each other, researchers try to match the population profile of the comparison and experimental groups. For example, researchers would try to match the groups on average age, gender balance, or median income. This is a less resource-intensive matching method, but researchers have to ensure that participants aren’t choosing which group (comparison or experimental) they are a part of.

As we’ve already talked about, this kind of design provides weaker evidence that the intervention itself leads to a change in outcome. Nonetheless, we are still able to establish time order using this method, and can thereby show an association between the intervention and the outcome. Like true experimental designs, this type of quasi-experimental design is useful for explanatory research questions.

What might this look like in a practice setting? Let’s say you’re working at an agency that provides CBT and other types of interventions, and you have identified a group of clients who are seeking help for social anxiety, as in our earlier example. Once you’ve obtained consent from your clients, you can create a comparison group using one of the matching methods we just discussed. If the group is small, you might match using individual matching, but if it’s larger, you’ll probably sort people by demographics to try to get similar population profiles. (You can do aggregate matching more easily when your agency has some kind of electronic records or database, but it’s still possible to do manually.)
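As a rough sketch of individual matching, the snippet below pairs each treated client with the most similar untreated client on two matching variables. The client names, ages, scores, and the distance metric are all hypothetical; a real project would match on whatever variables and weights the researcher considers relevant.

```python
from dataclasses import dataclass

@dataclass
class Client:
    name: str
    age: int
    baseline_anxiety: float  # hypothetical 0-100 scale

def distance(a: Client, b: Client) -> float:
    """Simple dissimilarity measure on the matching variables (weights are arbitrary)."""
    return abs(a.age - b.age) + abs(a.baseline_anxiety - b.baseline_anxiety) / 10

def individual_match(treated, untreated):
    """Pair each treated client with the most similar untreated client, without replacement."""
    pool = list(untreated)
    pairs = []
    for client in treated:
        best = min(pool, key=lambda candidate: distance(client, candidate))
        pool.remove(best)
        pairs.append((client.name, best.name))
    return pairs

# Hypothetical clients for illustration.
cbt_group = [Client("Ana", 34, 72.0), Client("Ben", 52, 65.0)]
candidates = [Client("Cal", 35, 70.0), Client("Dee", 50, 66.0), Client("Eli", 23, 80.0)]

print(individual_match(cbt_group, candidates))   # [('Ana', 'Cal'), ('Ben', 'Dee')]
```

Aggregate matching would instead compare summary statistics (average age, gender balance, and so on) of the two groups rather than pairing individuals.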

Time series design

Another type of quasi-experimental design is a time series design. Unlike other types of experimental design, time series designs do not have a comparison group. A time series is a set of measurements taken at intervals over a period of time (Figure 13.7). Proper time series design should include at least three pre- and post-intervention measurement points. While there are a few types of time series designs, we’re going to focus on the most common: interrupted time series design.

[Figure 13.7: Time series design]

But why use this method? Here’s an example. Let’s think about elementary student behavior throughout the school year. As anyone with children or who is a teacher knows, kids get very excited and animated around holidays, days off, or even just on a Friday afternoon. This fact might mean that around those times of year, there are more reports of disruptive behavior in classrooms. What if we took our one and only measurement in mid-December? It’s possible we’d see a higher-than-average rate of disruptive behavior reports, which could bias our results if our next measurement is around a time of year students are in a different, less excitable frame of mind. When we take multiple measurements throughout the first half of the school year, we can establish a more accurate baseline for the rate of these reports by looking at the trend over time.

We may want to test the effect of extended recess times in elementary school on reports of disruptive behavior in classrooms. When students come back after the winter break, the school extends recess by 10 minutes each day (the intervention), and the researchers start tracking the monthly reports of disruptive behavior again. These reports could be subject to the same fluctuations as the pre-intervention reports, and so we once again take multiple measurements over time to try to control for those fluctuations.

This method improves the extent to which we can establish causality because we are accounting for a major extraneous variable in the equation—the passage of time. On its own, it does not allow us to account for other extraneous variables, but it does establish time order and association between the intervention and the trend in reports of disruptive behavior. Finding a stable condition before the treatment that changes after the treatment is evidence for causality between treatment and outcome.
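A minimal sketch of how an interrupted time series might be summarised follows: compare the level and trend of the pre-intervention series with the post-intervention series. The monthly counts are invented for illustration; a real analysis would typically use segmented regression with more measurement points.

```python
# Hypothetical monthly counts of disruptive-behavior reports (illustrative numbers only).
pre_intervention  = [30, 34, 41, 33, 36]   # months before the extended recess
post_intervention = [28, 25, 27, 24, 22]   # months after the extended recess

def mean(xs):
    return sum(xs) / len(xs)

def slope(xs):
    """Ordinary least squares slope of the series against time (0, 1, 2, ...)."""
    n = len(xs)
    t_mean = (n - 1) / 2
    x_mean = mean(xs)
    num = sum((t - t_mean) * (x - x_mean) for t, x in enumerate(xs))
    den = sum((t - t_mean) ** 2 for t in range(n))
    return num / den

print(f"Pre-intervention mean:  {mean(pre_intervention):.1f} (trend {slope(pre_intervention):+.2f}/month)")
print(f"Post-intervention mean: {mean(post_intervention):.1f} (trend {slope(post_intervention):+.2f}/month)")
print(f"Change in level after the intervention: {mean(post_intervention) - mean(pre_intervention):+.1f} reports/month")
```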

Ex post facto comparison group design

Ex post facto (Latin for “after the fact”) designs are extremely similar to nonequivalent comparison group designs. There are still comparison and experimental groups, pretest and post-test measurements, and an intervention. But in ex post facto designs, participants are assigned to the comparison and experimental groups once the intervention has already happened. This type of design often occurs when interventions are already up and running at an agency and the agency wants to assess effectiveness based on people who have already completed treatment.

In most clinical agency environments, social workers conduct both initial and exit assessments, so there are usually some kind of pretest and post-test measures available. We also typically collect demographic information about our clients, which could allow us to try to use some kind of matching to construct comparison and experimental groups.

In terms of internal validity and establishing causality, ex post facto designs are a bit of a mixed bag. The ability to establish causality depends partially on the ability to construct comparison and experimental groups that are demographically similar, so we can control for these extraneous variables.

Quasi-experimental designs are common in social work intervention research because, when designed correctly, they balance the intense resource needs of true experiments with the realities of research in practice. They still offer researchers tools to gather robust evidence about whether interventions are having positive effects for clients.

  • Quasi-experimental designs are similar to true experiments, but do not require random assignment to experimental and control groups.
  • In quasi-experimental projects, the group not receiving the treatment is called the comparison group, not the control group.
  • Nonequivalent comparison group design is nearly identical to pretest/post-test experimental design, but participants are not randomly assigned to the experimental and control groups. As a result, this design provides slightly less robust evidence for causality.
  • Nonequivalent groups can be constructed by individual matching or aggregate matching.
  • Time series design does not have a control or experimental group, and instead compares the condition of participants before and after the intervention by measuring relevant factors at multiple points in time. This allows researchers to mitigate the error introduced by the passage of time.
  • Ex post facto comparison group designs are also similar to true experiments, but experimental and comparison groups are constructed after the intervention is over. This makes it more difficult to control for the effect of extraneous variables, but still provides useful evidence for causality because it maintains the time order of the experiment.
  • Think back to the experiment you considered for your research project in Section 13.3. Now that you know more about quasi-experimental designs, do you still think it's a true experiment? Why or why not?
  • What should you consider when deciding whether an experimental or quasi-experimental design would be more feasible or fit your research question better?

13.5 Non-experimental designs

Learners will be able to...

  • Describe non-experimental designs in social work research
  • Discuss how non-experimental research differs from true and quasi-experimental research
  • Demonstrate an understanding of the different types of non-experimental designs
  • Determine what kinds of research questions non-experimental designs are suited for
  • Discuss advantages and disadvantages of non-experimental designs

The previous sections have laid out the basics of some rigorous approaches to establish that an intervention is responsible for changes we observe in research participants. This type of evidence is extremely important for building an evidence base for social work interventions, but it's not the only type of evidence to consider. We will discuss qualitative methods, which provide us with rich, contextual information, in Part 4 of this text. The designs we'll talk about in this section are sometimes used in qualitative research, but in keeping with our discussion of experimental design so far, we're going to stay in the quantitative research realm for now. Non-experimental design is also often a stepping stone to more rigorous experimental designs in the future, as it can help test the feasibility of your research.

In general, non-experimental designs do not strongly support causality and don't address threats to internal validity. However, that's not really what they're intended for. Non-experimental designs are useful for a few different types of research, including explanatory questions in program evaluation. Certain types of non-experimental design are also helpful for researchers when they are trying to develop a new assessment or scale. Other times, researchers or agency staff did not get a chance to gather any assessment information before an intervention began, so a pretest/post-test design is not possible.


A significant benefit of these types of designs is that they're pretty easy to execute in a practice or agency setting. They don't require a comparison or control group, and as Engel and Schutt (2017) [3] point out, they "flow from a typical practice model of assessment, intervention, and evaluating the impact of the intervention" (p. 177). Thus, these designs are fairly intuitive for social workers, even when they aren't expert researchers. Below, we will go into some detail about the different types of non-experimental design.

One group pretest/post-test design

Also known as a before-after one-group design, this type of research design does not have a comparison group and everyone who participates in the research receives the intervention (Figure 13.8). This is a common type of design in program evaluation in the practice world. Controlling for extraneous variables is difficult or impossible in this design, but given that it is still possible to establish some measure of time order, it does provide weak support for causality.

[Figure 13.8: One-group pretest/post-test design]

Imagine, for example, a researcher who is interested in the effectiveness of an anti-drug education program on elementary school students’ attitudes toward illegal drugs. The researcher could assess students’ attitudes about illegal drugs (O1), implement the anti-drug program (X), and then, immediately after the program ends, measure students’ attitudes toward illegal drugs once again (O2). You can see how this would be relatively simple to do in practice, and you have probably been involved in this type of research design yourself, even if informally. But hopefully, you can also see that this design would not provide us with much evidence for causality, because we have no way of controlling for the effect of extraneous variables. A lot of things could have affected any change in students’ attitudes—maybe girls already had different attitudes about illegal drugs than children of other genders, and when we look at the class’s results as a whole, we couldn’t account for that influence using this design.

All of that doesn't mean these results aren't useful, however. If we find that children's attitudes didn't change at all after the drug education program, then we need to think seriously about how to make it more effective or whether we should be using it at all. (This immediate, practical application of our results highlights a key difference between program evaluation and research, which we will discuss in Chapter 23 .)
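For illustration, a one-group pretest/post-test comparison reduces to comparing O1 and O2 for the same participants. The attitude scores below are invented; the point is simply that the only available evidence is the within-group change.

```python
import statistics

# Hypothetical attitude scores (0-100, higher = stronger anti-drug attitude),
# measured before (O1) and after (O2) the anti-drug education program (X).
o1 = [55, 62, 48, 70, 58, 61, 52, 66]
o2 = [63, 64, 55, 72, 60, 68, 57, 70]

changes = [after - before for before, after in zip(o1, o2)]

print(f"Mean change after the program: {statistics.mean(changes):+.1f} points")
print(f"Students who shifted in the intended direction: {sum(c > 0 for c in changes)} of {len(changes)}")
```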

After-only design

As the name suggests, this type of non-experimental design involves measurement only after an intervention. There is no comparison or control group, and everyone receives the intervention. I have seen this design repeatedly in my time as a program evaluation consultant for nonprofit organizations, because often these organizations realize too late that they would like to or need to have some sort of measure of what effect their programs are having.

Because there is no pretest and no comparison group, this design is not useful for supporting causality since we can't establish the time order and we can't control for extraneous variables. However, that doesn't mean it's not useful at all! Sometimes, agencies need to gather information about how their programs are functioning. A classic example of this design is satisfaction surveys—realistically, these can only be administered after a program or intervention. Questions regarding satisfaction, ease of use or engagement, or other questions that don't involve comparisons are best suited for this type of design.

Static-group design

A final type of non-experimental research is the static-group design. In this type of research, there are both comparison and experimental groups, which are not randomly assigned. There is no pretest, only a post-test, and the comparison group has to be constructed by the researcher. Sometimes, researchers will use matching techniques to construct the groups, but often, the groups are constructed by convenience of who is being served at the agency.

Non-experimental research designs are easy to execute in practice, but we must be cautious about drawing causal conclusions from the results. A positive result may still suggest that we should continue using a particular intervention (and no result or a negative result should make us reconsider whether we should use that intervention at all). You have likely seen non-experimental research in your daily life or at your agency, and knowing the basics of how to structure such a project will help you ensure you are providing clients with the best care possible.

  • Non-experimental designs are useful for describing phenomena, but cannot demonstrate causality.
  • After-only designs are often used in agency and practice settings because practitioners are often not able to set up pre-test/post-test designs.
  • Non-experimental designs are useful for explanatory questions in program evaluation and are helpful for researchers when they are trying to develop a new assessment or scale.
  • Non-experimental designs are well-suited to qualitative methods.
  • If you were to use a non-experimental design for your research project, which would you choose? Why?
  • Have you conducted non-experimental research in your practice or professional life? Which type of non-experimental design was it?

13.6 Critical, ethical, and cultural considerations

  • Describe critiques of experimental design
  • Identify ethical issues in the design and execution of experiments
  • Identify cultural considerations in experimental design

As I said at the outset, experiments, and especially true experiments, have long been seen as the gold standard to gather scientific evidence. When it comes to research in the biomedical field and other physical sciences, true experiments are subject to far less nuance than experiments in the social world. This doesn't mean they are easier—just subject to different forces. However, as a society, we have placed the most value on quantitative evidence obtained through empirical observation and especially experimentation.

Major critiques of experimental designs tend to focus on true experiments, especially randomized controlled trials (RCTs), but many of these critiques can be applied to quasi-experimental designs, too. Some researchers, even in the biomedical sciences, question the view that RCTs are inherently superior to other types of quantitative research designs. RCTs are far less flexible and have much more stringent requirements than other types of research. One seemingly small issue, like incorrect information about a research participant, can derail an entire RCT. RCTs also cost a great deal of money to implement and don't reflect “real world” conditions. The cost of true experimental research or RCTs also means that some communities are unlikely to ever have access to these research methods. It is then easy for people to dismiss their research findings because their methods are seen as "not rigorous."

Obviously, controlling outside influences is important for researchers to draw strong conclusions, but what if those outside influences are actually important to how an intervention works? Are we missing really important information by focusing solely on control in our research? Is a treatment going to work the same for white women as it does for Indigenous women? With the myriad effects of our societal structures, you should be very careful about ever assuming this will be the case. This doesn't mean that cultural differences will negate the effect of an intervention; instead, it means that you should remember to practice cultural humility when implementing any intervention, even those we "know" work.

How we build evidence through experimental research reveals a lot about our values and biases, and historically, much experimental research has been conducted on white people, and especially white men. [4] This makes sense when we consider the extent to which the sciences and academia have historically been dominated by white patriarchy. This is especially important for marginalized groups that have long been ignored in research literature, meaning they have also been ignored in the development of interventions and treatments that are accepted as "effective." There are examples of marginalized groups being experimented on without their consent, like the Tuskegee Experiment or Nazi experiments on Jewish people during World War II. We cannot ignore the collective consciousness situations like this can create about experimental research for marginalized groups.

None of this is to say that experimental research is inherently bad or that you shouldn't use it. Quite the opposite—use it when you can, because there are a lot of benefits, as we learned throughout this chapter. As a social work researcher, you are uniquely positioned to conduct experimental research while applying social work values and ethics to the process and be a leader for others to conduct research in the same framework. It can conflict with our professional ethics, especially respect for persons and beneficence, if we do not engage in experimental research with our eyes wide open. We also have the benefit of a great deal of practice knowledge that researchers in other fields have not had the opportunity to get. As with all your research, always be sure you are fully exploring the limitations of the research.

  • While true experimental research gathers strong evidence, it can also be inflexible, expensive, and overly simplistic about the important social forces that affect research results.
  • Marginalized communities' past experiences with experimental research can affect how they respond to research participation.
  • Social work researchers should use both their values and ethics, and their practice experiences, to inform research and push other researchers to do the same.
  • Think back to the true experiment you sketched out in the exercises for Section 13.3. Are there cultural or historical considerations you hadn't thought of with your participant group? What are they? Does this change the type of experiment you would want to do?
  • How can you as a social work researcher encourage researchers in other fields to consider social work ethics and values in their experimental research?
  • Engel, R. & Schutt, R. (2016). The practice of research in social work. Thousand Oaks, CA: SAGE Publications, Inc.
  • Sullivan, G. M. (2011). Getting off the “gold standard”: Randomized controlled trials and education research. Journal of Graduate Medical Education, 3(3), 285-289.

Key terms in this chapter:

  • Experiment: an operation or procedure carried out under controlled conditions in order to discover an unknown effect or law, to test or establish a hypothesis, or to illustrate a known law.
  • Explanatory research: explains why particular phenomena work in the way that they do; answers “why” questions.
  • Extraneous variables: variables and characteristics that have an effect on your outcome, but aren't the primary variable whose influence you're interested in testing.
  • Control group: the group of participants in our study who do not receive the intervention we are researching, in experiments with random assignment.
  • Experimental group: in experimental design, the group of participants in our study who do receive the intervention we are researching.
  • Comparison group: the group of participants in our study who do not receive the intervention we are researching, in experiments without random assignment.
  • Random assignment: using a random process to decide which participants are tested in which conditions.
  • Generalizability: the ability to apply research findings beyond the study sample to some broader population.
  • Internal validity: the ability to say that one variable "causes" something to happen to another variable; very important to assess in studies that examine causation, such as experimental or quasi-experimental designs.
  • Causality: the idea that one event, behavior, or belief will result in the occurrence of another, subsequent event, behavior, or belief.
  • True experimental design: an experimental design in which one or more independent variables are manipulated by the researcher (as treatments), subjects are randomly assigned to different treatment levels (random assignment), and the results of the treatments on outcomes (dependent variables) are observed.
  • Pretest and post-test control group design: a type of experimental design in which participants are randomly assigned to control and experimental groups, one group receives an intervention, and both groups receive pre- and post-test assessments.
  • Pretest: a measure of a participant's condition before they receive an intervention or treatment.
  • Post-test: a measure of a participant's condition after an intervention or, if they are part of the control/comparison group, at the end of an experiment.
  • Time order: a demonstration that a change occurred after an intervention; an important criterion for establishing causality.
  • Post-test only control group design: an experimental design in which participants are randomly assigned to control and treatment groups, one group receives an intervention, and both groups receive only a post-test assessment.
  • Testing effect: the measurement error related to how a test is given; the conditions of the testing, including environmental conditions; and acclimation to the test itself.
  • Quasi-experimental design: a subtype of experimental design that is similar to a true experiment, but does not have randomly assigned control and treatment groups.
  • Individual matching: in nonequivalent comparison group designs, the process by which researchers match individual cases in the experimental group to similar cases in the comparison group.
  • Aggregate matching: in nonequivalent comparison group designs, the process in which researchers match the population profile of the comparison and experimental groups.
  • Time series: a set of measurements taken at intervals over a period of time.

Graduate research methods in social work Copyright © 2021 by Matthew DeCarlo, Cory Cummings, Kate Agnelli is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Discovering Research Hypotheses in Social Science Using Knowledge Graph Embeddings

  • Conference paper
  • First Online: 31 May 2021


Rosaline de Haan, Ilaria Tiddi & Wouter Beek

Part of the book series: Lecture Notes in Computer Science ((LNISA,volume 12731))

Included in the following conference series:

  • European Semantic Web Conference


In an era of ever-increasing numbers of scientific publications, scientists struggle to keep pace with the literature, interpret research results, and identify new research hypotheses to falsify. This is particularly true in fields such as the social sciences, where automated support for scientific discovery is still largely unavailable and unimplemented. In this work, we introduce an automated system that supports social scientists in identifying new research hypotheses. With the idea that knowledge graphs help model domain-specific information, and that machine learning can be used to identify the most relevant facts therein, we frame the problem of hypothesis discovery as a link prediction task, where the ComplEx model is used to predict new relationships between entities of a knowledge graph representing scientific papers and their experimental details. The final output consists of fully formulated hypotheses that include the newly discovered triples (hypothesis statement), along with supporting statements from the knowledge graph (hypothesis evidence and hypothesis history). A quantitative and qualitative evaluation is carried out with experts in the field. Encouraging results show that a simple combination of machine learning and knowledge graph methods can serve as a basis for automated scientific discovery.
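The paper frames hypothesis discovery as link prediction with the ComplEx model. As a rough illustration of that idea (not the authors' actual pipeline, which is trained on the Cooperation Databank knowledge graph), the sketch below implements the ComplEx scoring function, Re(<e_h, w_r, conj(e_t)>), over toy, randomly initialised embeddings; the entity and relation names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 50  # embedding dimension (illustrative)

# Toy complex-valued embeddings for a handful of entities and relations;
# in the paper these would be learned from the knowledge graph of studies.
entities = {name: rng.normal(size=DIM) + 1j * rng.normal(size=DIM)
            for name in ["study_123", "punishment_treatment", "higher_cooperation"]}
relations = {name: rng.normal(size=DIM) + 1j * rng.normal(size=DIM)
             for name in ["hasIndependentVariable", "reportsEffect"]}

def complex_score(head, relation, tail):
    """ComplEx scoring function: Re(<e_h, w_r, conj(e_t)>).
    Higher scores mean the triple is predicted to be more plausible."""
    return float(np.real(np.sum(entities[head] * relations[relation] * np.conj(entities[tail]))))

# Rank candidate triples (potential hypothesis statements) by plausibility.
candidates = [
    ("study_123", "hasIndependentVariable", "punishment_treatment"),
    ("study_123", "reportsEffect", "higher_cooperation"),
]
for triple in sorted(candidates, key=lambda t: complex_score(*t), reverse=True):
    print(triple, round(complex_score(*triple), 3))
```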





Author information

Authors and affiliations.

Triply, Amsterdam, The Netherlands

Rosaline de Haan & Wouter Beek

Vrije Universiteit Amsterdam, Amsterdam, The Netherlands

Ilaria Tiddi


Corresponding author

Correspondence to Ilaria Tiddi .



Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper.

de Haan, R., Tiddi, I., Beek, W. (2021). Discovering Research Hypotheses in Social Science Using Knowledge Graph Embeddings. In: Verborgh, R., et al. The Semantic Web. ESWC 2021. Lecture Notes in Computer Science(), vol 12731. Springer, Cham. https://doi.org/10.1007/978-3-030-77385-4_28


DOI: https://doi.org/10.1007/978-3-030-77385-4_28

Published: 31 May 2021

Publisher Name: Springer, Cham

Print ISBN: 978-3-030-77384-7

Online ISBN: 978-3-030-77385-4





Sampling is the statistical process of selecting a subset—called a ‘sample’—of a population of interest for the purpose of making observations and statistical inferences about that population. Social science research is generally about inferring patterns of behaviours within specific populations. We cannot study entire populations because of feasibility and cost constraints, and hence, we must select a representative sample from the population of interest for observation and analysis. It is extremely important to choose a sample that is truly representative of the population so that the inferences derived from the sample can be generalised back to the population of interest. Improper and biased sampling is the primary reason for the often divergent and erroneous inferences reported in opinion polls and exit polls conducted by different polling groups such as CNN/Gallup Poll, ABC, and CBS, prior to every US Presidential election.

The sampling process

As Figure 8.1 shows, the sampling process comprises several stages. The first stage is defining the target population. A population can be defined as all people or items ( unit of analysis ) with the characteristics that one wishes to study. The unit of analysis may be a person, group, organisation, country, object, or any other entity that you wish to draw scientific inferences about. Sometimes the population is obvious. For example, if a manufacturer wants to determine whether finished goods manufactured at a production line meet certain quality requirements or must be scrapped and reworked, then the population consists of the entire set of finished goods manufactured at that production facility. At other times, the target population may be a little harder to understand. If you wish to identify the primary drivers of academic learning among high school students, then what is your target population: high school students, their teachers, school principals, or parents? The right answer in this case is high school students, because you are interested in their performance, not the performance of their teachers, parents, or schools. Likewise, if you wish to analyse the behaviour of roulette wheels to identify biased wheels, your population of interest is not different observations from a single roulette wheel, but different roulette wheels (i.e., their behaviour over an infinite set of wheels).

[Figure 8.1: The sampling process]

The second step in the sampling process is to choose a sampling frame . This is an accessible section of the target population—usually a list with contact information—from where a sample can be drawn. If your target population is professional employees at work, because you cannot access all professional employees around the world, a more realistic sampling frame will be employee lists of one or two local companies that are willing to participate in your study. If your target population is organisations, then the Fortune 500 list of firms or the Standard & Poor’s (S&P) list of firms registered with the New York Stock Exchange may be acceptable sampling frames.

Note that sampling frames may not entirely be representative of the population at large, and if so, inferences derived from such a sample may not be generalisable to the population. For instance, if your target population is organisational employees at large (e.g., you wish to study employee self-esteem in this population) and your sampling frame is employees at automotive companies in the American Midwest, findings from such groups may not even be generalisable to the American workforce at large, let alone the global workplace. This is because the American auto industry has been under severe competitive pressures for the last 50 years and has seen numerous episodes of reorganisation and downsizing, possibly resulting in low employee morale and self-esteem. Furthermore, the majority of the American workforce is employed in service industries or in small businesses, and not in the automotive industry. Hence, a sample of American auto industry employees is not particularly representative of the American workforce. Likewise, the Fortune 500 list includes the 500 largest American enterprises, which is not representative of all American firms, most of which are medium or small sized firms rather than large firms, and is therefore a biased sampling frame. In contrast, the S&P list will allow you to select large, medium, and/or small companies, depending on whether you use the S&P LargeCap, MidCap, or SmallCap lists, but includes only publicly traded firms (and not private firms) and is hence still biased. Also note that the population from which a sample is drawn may not necessarily be the same as the population about which we actually want information. For example, if a researcher wants to examine the success rate of a new ‘quit smoking’ program, then the target population is the universe of smokers who had access to this program, which may be an unknown population. Hence, the researcher may sample patients arriving at a local medical facility for smoking cessation treatment, some of whom may not have had exposure to this particular ‘quit smoking’ program, in which case, the sampling frame does not correspond to the population of interest.

The last step in sampling is choosing a sample from the sampling frame using a well-defined sampling technique. Sampling techniques can be grouped into two broad categories: probability (random) sampling and non-probability sampling. Probability sampling is ideal if generalisability of results is important for your study, but there may be unique circumstances where non-probability sampling can also be justified. These techniques are discussed in the next two sections.

Probability sampling

Probability sampling is a technique in which every unit in the population has a chance (non-zero probability) of being selected in the sample, and this chance can be accurately determined. Sample statistics thus produced, such as sample mean or standard deviation, are unbiased estimates of population parameters, as long as the sampled units are weighted according to their probability of selection. All probability sampling techniques have two attributes in common: every unit in the population has a known non-zero probability of being sampled, and the sampling procedure involves random selection at some point. The different types of probability sampling techniques include:

Simple random sampling. In this technique, every unit in the sampling frame has an equal probability of being selected, and samples are usually drawn with the help of random numbers—for example, using a random number generator to pick 200 firms from a list of 1,000 firms. This is the simplest of the probability sampling techniques, and estimates derived from it are readily generalisable to the population.

Systematic sampling. In this technique, the sampling frame is ordered according to some criterion and elements are selected at regular intervals through that ordered list. Sampling begins with a random start and then proceeds by selecting every kth unit from that point onwards, where k = N/n is the ratio of the sampling frame size N to the desired sample size n. To select 200 firms from a list of 1,000 firms, you would choose a random starting point among the first five firms and then select every fifth firm on the list.
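As an illustration of the two techniques just described, the sketch below draws a sample of 200 from a hypothetical frame of 1,000 firms, first by simple random sampling and then by systematic sampling with k = N/n = 5. The firm identifiers are invented.

```python
import random

random.seed(1)

# Hypothetical sampling frame: a list of 1,000 firm identifiers.
sampling_frame = [f"firm_{i:04d}" for i in range(1, 1001)]
n = 200

# Simple random sampling: every subset of 200 firms is equally likely.
srs = random.sample(sampling_frame, n)

# Systematic sampling: random start, then every k-th firm, where k = N / n.
k = len(sampling_frame) // n            # 1000 / 200 = 5
start = random.randrange(k)
systematic = sampling_frame[start::k]

print(len(srs), len(systematic))        # 200 200
```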

Stratified sampling. In stratified sampling, the sampling frame is divided into homogeneous and non-overlapping subgroups (called ‘strata’), and a simple random sample is drawn within each subgroup. In the previous example of selecting 200 firms from a list of 1,000 firms, you can start by categorising the firms based on their size as large (more than 500 employees), medium (between 50 and 500 employees), and small (less than 50 employees). You can then randomly select 67 firms from each subgroup to make up your sample of 200 firms. However, since there are many more small firms in a sampling frame than large firms, having an equal number of small, medium, and large firms will make the sample less representative of the population (i.e., biased in favour of large firms that are fewer in number in the target population). This is called non-proportional stratified sampling because the proportion of the sample within each subgroup does not reflect the proportions in the sampling frame—or the population of interest—and the smaller subgroup (large-sized firms) is oversampled . An alternative technique will be to select subgroup samples in proportion to their size in the population. For instance, if there are 100 large firms, 300 mid-sized firms, and 600 small firms, you can sample 20 firms from the ‘large’ group, 60 from the ‘medium’ group and 120 from the ‘small’ group. In this case, the proportional distribution of firms in the population is retained in the sample, and hence this technique is called proportional stratified sampling. Note that the non-proportional approach is particularly effective in representing small subgroups, such as large-sized firms, and is not necessarily less representative of the population compared to the proportional approach, as long as the findings of the non-proportional approach are weighted in accordance to a subgroup’s proportion in the overall population.
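Continuing the same example, the sketch below contrasts proportional stratified sampling with a non-proportional, equal-allocation approach for the 100 large, 300 medium, and 600 small firms described above. The firm identifiers are again hypothetical.

```python
import random

random.seed(2)

# Hypothetical sampling frame grouped into strata by firm size.
strata = {
    "large":  [f"large_{i}"  for i in range(100)],
    "medium": [f"medium_{i}" for i in range(300)],
    "small":  [f"small_{i}"  for i in range(600)],
}
n = 200
frame_size = sum(len(firms) for firms in strata.values())

# Proportional stratified sampling: each stratum contributes in proportion to its size.
proportional = {name: random.sample(firms, round(n * len(firms) / frame_size))
                for name, firms in strata.items()}

# Non-proportional stratified sampling: equal allocation, which oversamples large firms.
non_proportional = {name: random.sample(firms, n // len(strata))
                    for name, firms in strata.items()}

print({name: len(s) for name, s in proportional.items()})      # {'large': 20, 'medium': 60, 'small': 120}
print({name: len(s) for name, s in non_proportional.items()})  # {'large': 66, 'medium': 66, 'small': 66}
```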

Cluster sampling. If you have a population dispersed over a wide geographic region, it may not be feasible to conduct a simple random sampling of the entire population. In such cases, it may be reasonable to divide the population into ‘clusters’—usually along geographic boundaries—randomly sample a few clusters, and measure all units within that cluster. For instance, if you wish to sample city governments in the state of New York, rather than travel all over the state to interview key city officials (as you may have to do with a simple random sample), you can cluster these governments based on their counties, randomly select a set of three counties, and then interview officials from every office in those counties. However, depending on between-cluster differences, the variability of sample estimates in a cluster sample will generally be higher than that of a simple random sample, and hence the results are less generalisable to the population than those obtained from simple random samples.

Matched-pairs sampling. Sometimes, researchers may want to compare two subgroups within one population based on a specific criterion. For instance, why are some firms consistently more profitable than other firms? To conduct such a study, you would have to categorise a sampling frame of firms into ‘high profitable’ firms and ‘low profitable firms’ based on gross margins, earnings per share, or some other measure of profitability. You would then select a simple random sample of firms in one subgroup, and match each firm in this group with a firm in the second subgroup, based on its size, industry segment, and/or other matching criteria. Now, you have two matched samples of high-profitability and low-profitability firms that you can study in greater detail. Matched-pairs sampling techniques are often an ideal way of understanding bipolar differences between different subgroups within a given population.

Multi-stage sampling. The probability sampling techniques described previously are all examples of single-stage sampling techniques. Depending on your sampling needs, you may combine these single-stage techniques to conduct multi-stage sampling. For instance, you can stratify a list of businesses based on firm size, and then conduct systematic sampling within each stratum. This is a two-stage combination of stratified and systematic sampling. Likewise, you can start with a cluster of school districts in the state of New York, and within each cluster, select a simple random sample of schools. Within each school, you can select a simple random sample of grade levels, and within each grade level, you can select a simple random sample of students for study. In this case, you have a four-stage sampling process consisting of cluster and simple random sampling.

Non-probability sampling

Non-probability sampling is a sampling technique in which some units of the population have zero chance of selection or where the probability of selection cannot be accurately determined. Typically, units are selected based on certain non-random criteria, such as quota or convenience. Because selection is non-random, non-probability sampling does not allow the estimation of sampling errors, and may be subjected to a sampling bias. Therefore, information from a sample cannot be generalised back to the population. Types of non-probability sampling techniques include:

Convenience sampling. Also called accidental or opportunity sampling, this is a technique in which a sample is drawn from that part of the population that is close to hand, readily available, or convenient. For instance, if you stand outside a shopping centre and hand out questionnaire surveys to people or interview them as they walk in, the sample of respondents you will obtain will be a convenience sample. This is a non-probability sample because you are systematically excluding all people who shop at other shopping centres. The opinions that you would get from your chosen sample may reflect the unique characteristics of this shopping centre such as the nature of its stores (e.g., high-end stores will attract a more affluent demographic), the demographic profile of its patrons, or its location (e.g., a shopping centre close to a university will attract primarily university students with unique purchasing habits), and therefore may not be representative of the opinions of the shopper population at large. Hence, the scientific generalisability of such observations will be very limited. Other examples of convenience sampling are sampling students registered in a certain class or sampling patients arriving at a certain medical clinic. This type of sampling is most useful for pilot testing, where the goal is instrument testing or measurement validation rather than obtaining generalisable inferences.

Quota sampling. In this technique, the population is segmented into mutually exclusive subgroups (just as in stratified sampling), and then a non-random set of observations is chosen from each subgroup to meet a predefined quota. In proportional quota sampling, the proportion of respondents in each subgroup should match that of the population. For instance, if the American population consists of 70 per cent Caucasians, 15 per cent Hispanic-Americans, and 13 per cent African-Americans, and you wish to understand their voting preferences in a sample of 98 people, you can stand outside a shopping centre and ask people their voting preferences. But you will have to stop asking Hispanic-looking people when you have 15 responses from that subgroup (or African-Americans when you have 13 responses) even as you continue sampling other ethnic groups, so that the ethnic composition of your sample matches that of the general American population.

Non-proportional quota sampling is less restrictive in that you do not have to achieve a proportional representation, but perhaps meet a minimum size in each subgroup. In this case, you may decide to have 50 respondents from each of the three ethnic subgroups (Caucasians, Hispanic-Americans, and African-Americans), and stop when your quota for each subgroup is reached. Neither type of quota sampling will be representative of the American population, since depending on whether your study was conducted in a shopping centre in New York or Kansas, your results may be entirely different. The non-proportional technique is even less representative of the population, but may be useful in that it allows capturing the opinions of small and under-represented groups through oversampling.
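A toy Python sketch of the proportional version of this design follows; the quotas mirror the illustrative population shares above, and the next_shopper function merely simulates intercepting passers-by:

    import random

    # Quotas mirroring the illustrative population shares (sample of 98).
    quotas = {"Caucasian": 70, "Hispanic-American": 15, "African-American": 13}
    counts = {group: 0 for group in quotas}
    sample = []

    def next_shopper():
        """Simulates intercepting the next passer-by outside the shopping centre."""
        return {"ethnicity": random.choice(list(quotas)),
                "preference": random.choice(["Candidate A", "Candidate B"])}

    while sum(counts.values()) < sum(quotas.values()):
        person = next_shopper()
        group = person["ethnicity"]
        if counts[group] < quotas[group]:   # quota not yet filled: keep the response
            sample.append(person)
            counts[group] += 1
        # otherwise the response is simply not recorded

    print(counts)  # {'Caucasian': 70, 'Hispanic-American': 15, 'African-American': 13}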

Expert sampling. This is a technique where respondents are chosen in a non-random manner based on their expertise on the phenomenon being studied. For instance, in order to understand the impacts of a new governmental policy such as the Sarbanes-Oxley Act, you can sample a group of corporate accountants who are familiar with this Act. The advantage of this approach is that, since experts tend to be more familiar with the subject matter than non-experts, opinions from a sample of experts are more credible than those from a sample that includes both experts and non-experts, although the findings are still not generalisable to the overall population at large.

Snowball sampling. In snowball sampling, you start by identifying a few respondents who match the criteria for inclusion in your study, and then ask them to recommend others they know who also meet your selection criteria. For instance, if you wish to survey computer network administrators and you know of only one or two such people, you can start with them and ask them to recommend others who also work in network administration. Although this method hardly leads to representative samples, it may sometimes be the only way to reach hard-to-reach populations or to proceed when no sampling frame is available.
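The referral-chasing logic can be sketched as a simple breadth-first walk over a referral network; the network and the cap of five respondents below are entirely invented:

    # Invented referral network: each respondent lists the peers they would recommend.
    referrals = {
        "admin_A": ["admin_B", "admin_C"],
        "admin_B": ["admin_D"],
        "admin_C": ["admin_E", "admin_F"],
        "admin_D": [],
        "admin_E": ["admin_B"],
        "admin_F": [],
    }

    seeds = ["admin_A"]              # the one or two respondents you already know
    sample, to_visit = [], list(seeds)

    while to_visit and len(sample) < 5:
        person = to_visit.pop(0)
        if person in sample:
            continue
        sample.append(person)                  # survey this respondent...
        to_visit.extend(referrals[person])     # ...then follow up their recommendations

    print(sample)  # ['admin_A', 'admin_B', 'admin_C', 'admin_D', 'admin_E']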

Statistics of sampling

In the preceding sections, we introduced terms such as population parameter, sample statistic, and sampling bias. In this section, we will try to understand what these terms mean and how they are related to each other.

When you measure a certain observation from a given unit, such as a person’s response to a Likert-scaled item, that observation is called a response (see Figure 8.2). In other words, a response is a measurement value provided by a sampled unit. Each respondent will give you different responses to different items in an instrument. Responses from different respondents to the same item or observation can be graphed into a frequency distribution based on their frequency of occurrences. For a large number of responses in a sample, this frequency distribution tends to resemble a bell-shaped curve called a normal distribution, which can be used to estimate overall characteristics of the entire sample, such as the sample mean (average of all observations in a sample) or standard deviation (variability or spread of observations in a sample). These sample estimates are called sample statistics (a ‘statistic’ is a value that is estimated from observed data). Populations also have means and standard deviations that could be obtained if we could sample the entire population. However, since the entire population can never be sampled, population characteristics are always unknown, and are called population parameters (and not ‘statistics’, because they are not statistically estimated from data). Sample statistics may differ from population parameters if the sample is not perfectly representative of the population; the difference between the two is called sampling error. Theoretically, if we could gradually increase the sample size so that the sample approaches closer and closer to the population, then sampling error will decrease and a sample statistic will increasingly approximate the corresponding population parameter.
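For example, the sample mean and standard deviation can be computed directly from a batch of responses; the Likert responses below are invented for illustration:

    import statistics

    # Invented Likert-scale responses (1 = strongly disagree ... 5 = strongly agree).
    responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5, 1, 4, 3, 4, 5]

    sample_mean = statistics.mean(responses)   # average of all observations in the sample
    sample_sd = statistics.stdev(responses)    # spread of observations (n - 1 denominator)

    print(f"sample mean = {sample_mean:.2f}, sample standard deviation = {sample_sd:.2f}")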

If a sample is truly representative of the population, then the estimated sample statistics should be identical to the corresponding theoretical population parameters. How do we know whether the sample statistics are at least reasonably close to the population parameters? Here, we need to understand the concept of a sampling distribution. Imagine that you took three different random samples from a given population, as shown in Figure 8.3, and for each sample, you derived sample statistics such as the sample mean and standard deviation. If each random sample were truly representative of the population, then your three sample means from the three random samples would be identical—and equal to the population parameter—and the variability in sample means would be zero. But this is extremely unlikely, given that each random sample will likely constitute a different subset of the population, and hence their means may be slightly different from each other. However, you can take these three sample means and plot a frequency histogram of sample means. If the number of such samples increases from three to 10 to 100, the frequency histogram becomes a sampling distribution. Hence, a sampling distribution is a frequency distribution of a sample statistic (like the sample mean) from a set of samples, while the commonly referenced frequency distribution is the distribution of a response (observation) from a single sample. Just like a frequency distribution, the sampling distribution will tend to have more sample statistics clustered around the mean (which presumably is an estimate of a population parameter), with fewer values scattered far from the mean. With an infinitely large number of samples, this distribution will approach a normal distribution. The variability or spread of a sample statistic in a sampling distribution (i.e., the standard deviation of the sampling statistic) is called its standard error. In contrast, the term standard deviation is reserved for the variability of an observed response from a single sample.
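This can be demonstrated with a small simulation; the population below is synthetic (normally distributed with mean 50 and standard deviation 10), so the empirical standard error can be compared against the textbook value of the population standard deviation divided by the square root of the sample size:

    import random
    import statistics

    random.seed(1)

    # Synthetic population, used only to illustrate the idea of a sampling distribution.
    population = [random.gauss(50, 10) for _ in range(100_000)]

    n = 200  # size of each sample
    sample_means = [statistics.mean(random.sample(population, n)) for _ in range(1_000)]

    # The standard deviation of the sample means is the (empirical) standard error.
    standard_error = statistics.stdev(sample_means)
    print(f"mean of sample means ~ {statistics.mean(sample_means):.2f}")
    print(f"empirical standard error ~ {standard_error:.2f} (theory: 10 / sqrt(200) = 0.71)")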


Social Science Research: Principles, Methods and Practices (Revised edition) Copyright © 2019 by Anol Bhattacherjee is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


2.2 Research Methods

Learning Objectives

By the end of this section, you should be able to:

  • Recall the 6 Steps of the Scientific Method
  • Differentiate between four kinds of research methods: surveys, field research, experiments, and secondary data analysis.
  • Explain the appropriateness of specific research approaches for specific topics.

Sociologists examine the social world, see a problem or interesting pattern, and set out to study it. They use research methods to design a study. Planning the research design is a key step in any sociological study. Sociologists generally choose from widely used methods of social investigation: primary source data collection, such as surveys, participant observation, ethnography, case studies, unobtrusive observations, and experiments, or secondary data analysis, the use of existing sources. Every research method comes with plusses and minuses, and the topic of study strongly influences which method or methods are put to use. When you are conducting research, think about the best way to gather or obtain knowledge about your topic, and think of yourself as an architect. An architect needs a blueprint to build a house; as a sociologist, your blueprint is your research design, including your data collection method.

When entering a particular social environment, a researcher must be careful. There are times to remain anonymous and times to be overt. There are times to conduct interviews and times to simply observe. Some participants need to be thoroughly informed; others should not know they are being observed. A researcher wouldn’t stroll into a crime-ridden neighborhood at midnight, calling out, “Any gang members around?”

Making sociologists’ presence invisible is not always realistic for other reasons. That option is not available to a researcher studying prison behaviors, early education, or the Ku Klux Klan. Researchers can’t just stroll into prisons, kindergarten classrooms, or Klan meetings and unobtrusively observe behaviors without attracting attention. In situations like these, other methods are needed. Researchers choose methods that best suit their study topics, protect research participants or subjects, and fit with their overall approaches to research.

As a research method, a survey collects data from subjects who respond to a series of questions about behaviors and opinions, often in the form of a questionnaire or an interview. The survey is one of the most widely used scientific research methods. The standard survey format allows individuals a level of anonymity in which they can express personal ideas.

At some point, most people in the United States respond to some type of survey. The 2020 U.S. Census is an excellent example of a large-scale survey intended to gather sociological data. Since 1790, the United States has conducted a census, originally consisting of six questions, to collect demographic data about the residents who live in the United States. Currently, the Census is received by residents of the United States and five territories and consists of 12 questions.

Not all surveys are considered sociological research, however, and many surveys people commonly encounter focus on identifying marketing needs and strategies rather than testing a hypothesis or contributing to social science knowledge. Questions such as, “How many hot dogs do you eat in a month?” or “Were the staff helpful?” are not usually designed as scientific research. The Nielsen Ratings determine the popularity of television programming through scientific market research. However, polls conducted by television programs such as American Idol or So You Think You Can Dance cannot be generalized, because they are administered to an unrepresentative population, a specific show’s audience. You might receive polls through your cell phones or emails, from grocery stores, restaurants, and retail stores. They often provide you incentives for completing the survey.

Sociologists conduct surveys under controlled conditions for specific purposes. Surveys gather different types of information from people. While surveys are not great at capturing the ways people really behave in social situations, they are a great method for discovering how people feel, think, and act—or at least how they say they feel, think, and act. Surveys can track preferences for presidential candidates or reported individual behaviors (such as sleeping, driving, or texting habits) or information such as employment status, income, and education levels.

A survey targets a specific population, people who are the focus of a study, such as college athletes, international students, or teenagers living with type 1 (juvenile-onset) diabetes. Most researchers choose to survey a small sector of the population, or a sample, a manageable number of subjects who represent a larger population. The success of a study depends on how well a population is represented by the sample. In a random sample, every person in a population has the same chance of being chosen for the study. As a result, a Gallup Poll, if conducted as a nationwide random sampling, should be able to provide an accurate estimate of public opinion whether it contacts 2,000 or 10,000 people.

After selecting subjects, the researcher develops a specific plan to ask questions and record responses. It is important to inform subjects of the nature and purpose of the survey up front. If they agree to participate, researchers thank subjects and offer them a chance to see the results of the study if they are interested. The researcher presents the subjects with an instrument, which is a means of gathering the information.

A common instrument is a questionnaire. Subjects often answer a series of closed-ended questions. The researcher might ask yes-or-no or multiple-choice questions, allowing subjects to choose possible responses to each question. This kind of questionnaire collects quantitative data—data in numerical form that can be counted and statistically analyzed. Just count up the number of “yes” and “no” responses or correct answers, and chart them into percentages.
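Tallying closed-ended responses into percentages is a mechanical step; a tiny Python sketch with invented answers looks like this:

    from collections import Counter

    # Invented answers to a single yes/no questionnaire item.
    answers = ["yes", "no", "yes", "yes", "no", "yes", "no", "yes"]

    counts = Counter(answers)
    percentages = {answer: 100 * count / len(answers) for answer, count in counts.items()}
    print(percentages)  # {'yes': 62.5, 'no': 37.5}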

Questionnaires can also ask more complex questions with more complex answers—beyond “yes,” “no,” or checkbox options. These types of inquiries use open-ended questions that require short essay responses. Participants willing to take the time to write those answers might convey personal religious beliefs, political views, goals, or morals. The answers are subjective and vary from person to person. An example of such a question is, “How do you plan to use your college education?”

Some topics that investigate internal thought processes are impossible to observe directly and are difficult to discuss honestly in a public forum. People are more likely to share honest answers if they can respond to questions anonymously. This type of personal explanation is qualitative data—conveyed through words. Qualitative information is harder to organize and tabulate. The researcher will end up with a wide range of responses, some of which may be surprising. The benefit of written opinions, though, is the wealth of in-depth material that they provide.

An interview is a one-on-one conversation between the researcher and the subject, and it is a way of conducting surveys on a topic. However, participants are free to respond as they wish, without being limited by predetermined choices. In the back-and-forth conversation of an interview, a researcher can ask for clarification, spend more time on a subtopic, or ask additional questions. In an interview, a subject will ideally feel free to open up and answer questions that are often complex. There are no right or wrong answers. The subject might not even know how to answer the questions honestly.

Questions such as “How does society’s view of alcohol consumption influence your decision whether or not to take your first sip of alcohol?” or “Did you feel that the divorce of your parents would put a social stigma on your family?” involve so many factors that the answers are difficult to categorize. A researcher needs to avoid steering or prompting the subject to respond in a specific way; otherwise, the results will prove to be unreliable. The researcher will also benefit from gaining a subject’s trust, from empathizing or commiserating with a subject, and from listening without judgment.

Surveys often collect both quantitative and qualitative data. For example, a researcher interviewing people who are incarcerated might receive quantitative data, such as demographics (race, age, sex), that can be analyzed statistically; the researcher might discover that 20 percent of incarcerated people are above the age of 50. The researcher might also collect qualitative data, such as why people take advantage of educational opportunities during their sentence and other explanatory information.

The survey can be carried out online, over the phone, by mail, or face-to-face. When researchers collect data outside a laboratory, library, or workplace setting, they are conducting field research, which is our next topic.

Field Research

The work of sociology rarely happens in limited, confined spaces. Rather, sociologists go out into the world. They meet subjects where they live, work, and play. Field research refers to gathering primary data from a natural environment. To conduct field research, the sociologist must be willing to step into new environments and observe, participate, or experience those worlds. In field work, the sociologists, rather than the subjects, are the ones out of their element.

The researcher interacts with or observes people and gathers data along the way. The key point in field research is that it takes place in the subject’s natural environment, whether it’s a coffee shop or tribal village, a homeless shelter or the DMV, a hospital, airport, mall, or beach resort.

While field research often begins in a specific setting, the study’s purpose is to observe specific behaviors in that setting. Field work is optimal for observing how people think and behave. It seeks to understand why they behave that way. However, researchers may struggle to narrow down cause and effect when there are so many variables floating around in a natural environment. And while field research looks for correlation, its small sample size does not allow for establishing a causal relationship between two variables. Indeed, much of the data gathered in sociology do not identify a cause and effect but a correlation.

Sociology in the Real World

Beyoncé and Lady Gaga as Sociological Subjects

Sociologists have studied Lady Gaga and Beyoncé and their impact on music, movies, social media, fan participation, and social equality. In their studies, researchers have used several research methods including secondary analysis, participant observation, and surveys from concert participants.

In their study, Click, Lee & Holiday (2013) interviewed 45 Lady Gaga fans who utilized social media to communicate with the artist. These fans viewed Lady Gaga as a mirror of themselves and a source of inspiration. Like her, they embrace not being a part of mainstream culture. Many of Lady Gaga’s fans are members of the LGBTQ community. They see the song “Born This Way” as a rallying cry and answer her calls for “Paws Up” with a physical expression of solidarity—outstretched arms and fingers bent and curled to resemble monster claws.

Sascha Buchanan (2019) made use of participant observation to study the relationship between two fan groups, that of Beyoncé and that of Rihanna. She observed award shows sponsored by iHeartRadio, MTV EMA, and BET that pit one group against another as they competed for Best Fan Army, Biggest Fans, and FANdemonium. Buchanan argues that the media thus sustains a myth of rivalry between the two most commercially successful Black women vocal artists.

Participant Observation

In 2000, a comic writer named Rodney Rothman wanted an insider’s view of white-collar work. He slipped into the sterile, high-rise offices of a New York “dot com” agency. Every day for two weeks, he pretended to work there. His main purpose was simply to see whether anyone would notice him or challenge his presence. No one did. The receptionist greeted him. The employees smiled and said good morning. Rothman was accepted as part of the team. He even went so far as to claim a desk, inform the receptionist of his whereabouts, and attend a meeting. He published an article about his experience in The New Yorker called “My Fake Job” (2000). Later, he was discredited for allegedly fabricating some details of the story and The New Yorker issued an apology. However, Rothman’s entertaining article still offered fascinating descriptions of the inside workings of a “dot com” company and exemplified the lengths to which a writer, or a sociologist, will go to uncover material.

Rothman had conducted a form of study called participant observation, in which researchers join people and participate in a group’s routine activities for the purpose of observing them within that context. This method lets researchers experience a specific aspect of social life. A researcher might go to great lengths to get a firsthand look into a trend, institution, or behavior. A researcher might work as a waitress in a diner, experience homelessness for several weeks, or ride along with police officers as they patrol their regular beat. Often, these researchers try to blend in seamlessly with the population they study, and they may not disclose their true identity or purpose if they feel it would compromise the results of their research.

At the beginning of a field study, researchers might have a question: “What really goes on in the kitchen of the most popular diner on campus?” or “What is it like to be homeless?” Participant observation is a useful method if the researcher wants to explore a certain environment from the inside.

Field researchers simply want to observe and learn. In such a setting, the researcher will be alert and open minded to whatever happens, recording all observations accurately. Soon, as patterns emerge, questions will become more specific, observations will lead to hypotheses, and hypotheses will guide the researcher in analyzing data and generating results.

In a study of small towns in the United States conducted by sociological researchers Robert S. Lynd and Helen Merrell Lynd, the team altered their purpose as they gathered data. They initially planned to focus their study on the role of religion in U.S. towns. As they gathered observations, they realized that the effect of industrialization and urbanization was the more relevant topic of this social group. The Lynds did not change their methods, but they revised the purpose of their study.

This shaped the structure of Middletown: A Study in Modern American Culture , their published results (Lynd & Lynd, 1929).

The Lynds were upfront about their mission. The townspeople of Muncie, Indiana, knew why the researchers were in their midst. But some sociologists prefer not to alert people to their presence. The main advantage of covert participant observation is that it allows the researcher access to authentic, natural behaviors of a group’s members. The challenge, however, is gaining access to a setting without disrupting the pattern of others’ behavior. Becoming an inside member of a group, organization, or subculture takes time and effort. Researchers must pretend to be something they are not. The process could involve role playing, making contacts, networking, or applying for a job.

Once inside a group, some researchers spend months or even years pretending to be one of the people they are observing. However, as observers, they cannot get too involved. They must keep their purpose in mind and apply the sociological perspective. That way, they illuminate social patterns that are often unrecognized. Because information gathered during participant observation is mostly qualitative, rather than quantitative, the end results are often descriptive or interpretive. The researcher might present findings in an article or book and describe what he or she witnessed and experienced.

This type of research is what journalist Barbara Ehrenreich conducted for her book Nickel and Dimed. One day over lunch with her editor, Ehrenreich mentioned an idea. How can people exist on minimum-wage work? How do low-income workers get by? she wondered. Someone should do a study. To her surprise, her editor responded, Why don’t you do it?

That’s how Ehrenreich found herself joining the ranks of the working class. For several months, she left her comfortable home and lived and worked among people who lacked, for the most part, higher education and marketable job skills. Undercover, she applied for and worked minimum wage jobs as a waitress, a cleaning woman, a nursing home aide, and a retail chain employee. During her participant observation, she used only her income from those jobs to pay for food, clothing, transportation, and shelter.

She discovered the obvious, that it’s almost impossible to get by on minimum wage work. She also experienced and observed attitudes many middle and upper-class people never think about. She witnessed firsthand the treatment of working class employees. She saw the extreme measures people take to make ends meet and to survive. She described fellow employees who held two or three jobs, worked seven days a week, lived in cars, could not pay to treat chronic health conditions, got randomly fired, submitted to drug tests, and moved in and out of homeless shelters. She brought aspects of that life to light, describing difficult working conditions and the poor treatment that low-wage workers suffer.

The book she wrote upon her return to her real life as a well-paid writer has been widely read and is used in many college classrooms.

Ethnography

Ethnography is the immersion of the researcher in the natural setting of an entire social community to observe and experience their everyday life and culture. The heart of an ethnographic study focuses on how subjects view their own social standing and how they understand themselves in relation to a social group.

An ethnographic study might observe, for example, a small U.S. fishing town, an Inuit community, a village in Thailand, a Buddhist monastery, a private boarding school, or an amusement park. These places all have borders. People live, work, study, or vacation within those borders. People are there for a certain reason and therefore behave in certain ways and respect certain cultural norms. An ethnographer would commit to spending a determined amount of time studying every aspect of the chosen place, taking in as much as possible.

A sociologist studying a tribe in the Amazon might watch the way villagers go about their daily lives and then write a paper about it. To observe a spiritual retreat center, an ethnographer might sign up for a retreat and attend as a guest for an extended stay, observe and record data, and collate the material into results.

Institutional Ethnography

Institutional ethnography is an extension of basic ethnographic research principles that focuses intentionally on everyday concrete social relationships. Developed by Canadian sociologist Dorothy E. Smith (1990), institutional ethnography is often considered a feminist-inspired approach to social analysis and primarily considers women’s experiences within male-dominated societies and power structures. Smith’s work is seen to challenge sociology’s exclusion of women, both academically and in the study of women’s lives (Fenstermaker, n.d.).

Historically, social science research tended to objectify women and ignore their experiences except as viewed from the male perspective. Modern feminists note that describing women, and other marginalized groups, as subordinates helps those in authority maintain their own dominant positions (Social Sciences and Humanities Research Council of Canada, n.d.). Smith’s three major works explored what she called “the conceptual practices of power” and are still considered seminal works in feminist theory and ethnography (Fenstermaker, n.d.).

Sociological Research

The Making of Middletown: A Study in Modern U.S. Culture

In 1924, a young married couple named Robert and Helen Lynd undertook an unprecedented ethnography: to apply sociological methods to the study of one U.S. city in order to discover what “ordinary” people in the United States did and believed. Choosing Muncie, Indiana (population about 30,000) as their subject, they moved to the small town and lived there for eighteen months.

Ethnographers had been examining other cultures for decades—groups considered minorities or outsiders—like gangs, immigrants, and the poor. But no one had studied the so-called average American.

Recording interviews and using surveys to gather data, the Lynds objectively described what they observed. Researching existing sources, they compared Muncie in 1890 to the Muncie they observed in 1924. Most Muncie adults, they found, had grown up on farms but now lived in homes inside the city. As a result, the Lynds focused their study on the impact of industrialization and urbanization.

They observed that Muncie was divided into business and working class groups. They defined business class as dealing with abstract concepts and symbols, while working class people used tools to create concrete objects. The two classes led different lives with different goals and hopes. However, the Lynds observed, mass production offered both classes the same amenities. Like wealthy families, the working class was now able to own radios, cars, washing machines, telephones, vacuum cleaners, and refrigerators. This was an emerging material reality of the 1920s.

As the Lynds worked, they divided their manuscript into six chapters: Getting a Living, Making a Home, Training the Young, Using Leisure, Engaging in Religious Practices, and Engaging in Community Activities.

When the study was completed, the Lynds encountered a big problem. The Rockefeller Foundation, which had commissioned the book, claimed it was useless and refused to publish it. The Lynds asked if they could seek a publisher themselves.

Middletown: A Study in Modern American Culture was not only published in 1929 but also became an instant bestseller, a status unheard of for a sociological study. The book sold out six printings in its first year of publication, and has never gone out of print (Caplow, Hicks, & Wattenberg, 2000).

Nothing like it had ever been done before. Middletown was reviewed on the front page of the New York Times. Readers in the 1920s and 1930s identified with the citizens of Muncie, Indiana, but they were equally fascinated by the sociological methods and the use of scientific data to define ordinary people in the United States. The book was proof that social data was important—and interesting—to the U.S. public.

Sometimes a researcher wants to study one specific person or event. A case study is an in-depth analysis of a single event, situation, or individual. To conduct a case study, a researcher examines existing sources like documents and archival records, conducts interviews, engages in direct observation and even participant observation, if possible.

Researchers might use this method to study a single case of a foster child, drug lord, cancer patient, criminal, or rape victim. However, a major criticism of the case study as a method is that while offering depth on a topic, it does not provide enough evidence to form a generalized conclusion. In other words, it is difficult to make universal claims based on just one person, since one person does not verify a pattern. This is why most sociologists do not use case studies as a primary research method.

However, case studies are useful when the single case is unique. In these instances, a single case study can contribute tremendous insight. For example, a feral child, also called “wild child,” is one who grows up isolated from human beings. Feral children grow up without social contact and language, which are elements crucial to a “civilized” child’s development. These children mimic the behaviors and movements of animals, and often invent their own language. There are only about one hundred cases of “feral children” in the world.

As you may imagine, a feral child is a subject of great interest to researchers. Feral children provide unique information about child development because they have grown up outside of the parameters of “normal” growth and nurturing. And since there are very few feral children, the case study is the most appropriate method for researchers to use in studying the subject.

At age three, a Ukrainian girl named Oxana Malaya suffered severe parental neglect. She lived in a shed with dogs, and she ate raw meat and scraps. Five years later, a neighbor called authorities and reported seeing a girl who ran on all fours, barking. Officials brought Oxana into society, where she was cared for and taught some human behaviors, but she never became fully socialized. She has been designated as unable to support herself and now lives in a mental institution (Grice 2011). Case studies like this offer a way for sociologists to collect data that may not be obtained by any other method.

Experiments

You have probably tested some of your own personal social theories. “If I study at night and review in the morning, I’ll improve my retention skills.” Or, “If I stop drinking soda, I’ll feel better.” Cause and effect. If this, then that. When you test the theory, your results either prove or disprove your hypothesis.

One way researchers test social theories is by conducting an experiment, meaning they investigate relationships to test a hypothesis—a scientific approach.

There are two main types of experiments: lab-based experiments and natural or field experiments. In a lab setting, the research can be controlled so that more data can be recorded in a limited amount of time. In a natural or field-based experiment, the time it takes to gather the data cannot be controlled but the information might be considered more accurate since it was collected without interference or intervention by the researcher.

As a research method, either type of sociological experiment is useful for testing if-then statements: if a particular thing happens (cause), then another particular thing will result (effect). To set up a lab-based experiment, sociologists create artificial situations that allow them to manipulate variables.

Classically, the sociologist selects a set of people with similar characteristics, such as age, class, race, or education. Those people are divided into two groups. One is the experimental group and the other is the control group. The experimental group is exposed to the independent variable(s) and the control group is not. To test the benefits of tutoring, for example, the sociologist might provide tutoring to the experimental group of students but not to the control group. Then both groups would be tested for differences in performance to see if tutoring had an effect on the experimental group of students. As you can imagine, in a case like this, the researcher would not want to jeopardize the accomplishments of either group of students, so the setting would be somewhat artificial. The test would not count toward a grade on a student’s permanent record, for example.
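The comparison at the end of such an experiment often comes down to contrasting the two groups' outcomes; the post-test scores below are invented purely to illustrate the experimental/control logic:

    from statistics import mean

    # Invented post-test scores for the tutoring example described above.
    experimental = [78, 85, 90, 72, 88, 95, 81, 84]  # received tutoring
    control = [70, 75, 83, 68, 79, 88, 74, 77]       # no tutoring

    difference = mean(experimental) - mean(control)
    print(f"mean difference = {difference:.1f} points")

    # A fuller analysis would also ask whether a difference this large could
    # plausibly arise by chance (e.g., with a t-test), not just whether the
    # two group means differ.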

And if a researcher told the students they would be observed as part of a study on measuring the effectiveness of tutoring, the students might not behave naturally. This is called the Hawthorne effect—which occurs when people change their behavior because they know they are being watched as part of a study. The Hawthorne effect is unavoidable in some research studies because sociologists have to make the purpose of the study known. Subjects must be aware that they are being observed, and a certain amount of artificiality may result (Sonnenfeld 1985).

A real-life example will help illustrate the process. In 1971, Frances Heussenstamm, a sociology professor at California State University at Los Angeles, had a theory about police prejudice. To test her theory, she conducted research. She chose fifteen students from three ethnic backgrounds: Black, White, and Hispanic. She chose students who routinely drove to and from campus along Los Angeles freeway routes, and who had had perfect driving records for longer than a year.

Next, she placed a Black Panther bumper sticker on each car. That sticker, a representation of a social value, was the independent variable. In the 1970s, the Black Panthers were a revolutionary group actively fighting racism. Heussenstamm asked the students to follow their normal driving patterns. She wanted to see whether seeming support for the Black Panthers would change how these good drivers were treated by the police patrolling the highways. The dependent variable would be the number of traffic stops/citations.

The first arrest, for an incorrect lane change, was made two hours after the experiment began. One participant was pulled over three times in three days. He quit the study. After seventeen days, the fifteen drivers had collected a total of thirty-three traffic citations. The research was halted. The funding to pay traffic fines had run out, and so had the enthusiasm of the participants (Heussenstamm, 1971).

Secondary Data Analysis

While sociologists often engage in original research studies, they also contribute knowledge to the discipline through secondary data analysis. Secondary data do not result from firsthand research collected from primary sources, but are the already completed work of other researchers or data collected by an agency or organization. Sociologists might study works written by historians, economists, teachers, or early sociologists. They might search through periodicals, newspapers, or magazines, or organizational data from any period in history.

Using available information not only saves time and money but can also add depth to a study. Sociologists often interpret findings in a new way, a way that was not part of an author’s original purpose or intention. To study how women were encouraged to act and behave in the 1960s, for example, a researcher might watch movies, television shows, and situation comedies from that period. Or to research changes in behavior and attitudes due to the emergence of television in the late 1950s and early 1960s, a sociologist would rely on new interpretations of secondary data. Decades from now, researchers will most likely conduct similar studies on the advent of mobile phones, the Internet, or social media.

Social scientists also learn by analyzing the research of a variety of agencies. Governmental departments and global groups, like the U.S. Bureau of Labor Statistics or the World Health Organization (WHO), publish studies with findings that are useful to sociologists. A public statistic like the foreclosure rate might be useful for studying the effects of a recession. A racial demographic profile might be compared with data on education funding to examine the resources accessible by different groups.

One of the advantages of secondary data like old movies or WHO statistics is that it is nonreactive research (or unobtrusive research), meaning that it does not involve direct contact with subjects and will not alter or influence people’s behaviors. Unlike studies requiring direct contact with people, using previously published data does not require entering a population and the investment and risks inherent in that research process.

Using available data does have its challenges. Public records are not always easy to access. A researcher will need to do some legwork to track them down and gain access to records. To guide the search through a vast library of materials and avoid wasting time reading unrelated sources, sociologists employ content analysis, applying a systematic approach to record and value information gleaned from secondary data as they relate to the study at hand.

Also, in some cases, there is no way to verify the accuracy of existing data. It is easy to count how many drunk drivers, for example, are pulled over by the police. But how many are not? While it’s possible to discover the percentage of teenage students who drop out of high school, it might be more challenging to determine the number who return to school or get their GED later.

Another problem arises when data are unavailable in the exact form needed or do not survey the topic from the precise angle the researcher seeks. For example, the average salaries paid to professors at a public school are public record. But these figures do not necessarily reveal how long it took each professor to reach the salary range, what their educational backgrounds are, or how long they’ve been teaching.

When conducting content analysis, it is important to consider the date of publication of an existing source and to take into account attitudes and common cultural ideals that may have influenced the research. For example, when Robert S. Lynd and Helen Merrell Lynd gathered research in the 1920s, attitudes and cultural norms were vastly different then than they are now. Beliefs about gender roles, race, education, and work have changed significantly since then. At the time, the study’s purpose was to reveal insights about small U.S. communities. Today, it is an illustration of 1920s attitudes and values.



Want to cite, share, or modify this book? This book uses the Creative Commons Attribution License and you must attribute OpenStax.

Access for free at https://openstax.org/books/introduction-sociology-3e/pages/1-introduction
  • Authors: Tonja R. Conerly, Kathleen Holmes, Asha Lal Tamang
  • Publisher/website: OpenStax
  • Book title: Introduction to Sociology 3e
  • Publication date: Jun 3, 2021
  • Location: Houston, Texas
  • Book URL: https://openstax.org/books/introduction-sociology-3e/pages/1-introduction
  • Section URL: https://openstax.org/books/introduction-sociology-3e/pages/2-2-research-methods

© Jan 18, 2024 OpenStax. Textbook content produced by OpenStax is licensed under a Creative Commons Attribution License . The OpenStax name, OpenStax logo, OpenStax book covers, OpenStax CNX name, and OpenStax CNX logo are not subject to the Creative Commons license and may not be reproduced without the prior and express written consent of Rice University.

Research-Methodology

Deductive Approach (Deductive Reasoning)

A deductive approach is concerned with “developing a hypothesis (or hypotheses) based on existing theory, and then designing a research strategy to test the hypothesis” [1]

It has been stated that “deductive means reasoning from the particular to the general. If a causal relationship or link seems to be implied by a particular theory or case example, it might be true in many cases. A deductive design might test to see if this relationship or link did obtain on more general circumstances” [2] .

The deductive approach can be explained by means of hypotheses, which can be derived from the propositions of the theory. In other words, the deductive approach is concerned with deducing conclusions from premises or propositions.

Deduction begins with an expected pattern “that is tested against observations, whereas induction begins with observations and seeks to find a pattern within them” [3] .

Advantages of Deductive Approach

The deductive approach offers the following advantages:

  • Possibility to explain causal relationships between concepts and variables
  • Possibility to measure concepts quantitatively
  • Possibility to generalize research findings to a certain extent

The alternative to the deductive approach is the inductive approach. The table below guides the choice of a specific approach depending on circumstances:

Choice between deductive and inductive approaches

Deductive research approach explores a known theory or phenomenon and tests if that theory is valid in given circumstances. It has been noted that “the deductive approach follows the path of logic most closely. The reasoning starts with a theory and leads to a new hypothesis. This hypothesis is put to the test by confronting it with observations that either lead to a confirmation or a rejection of the hypothesis” [4] .

Moreover, deductive reasoning can be explained as “reasoning from the general to the particular” [5], whereas inductive reasoning is the opposite. In other words, the deductive approach involves the formulation of hypotheses and their subjection to testing during the research process, while inductive studies do not deal with hypotheses in any way.

Application of Deductive Approach (Deductive Reasoning) in Business Research

In studies with deductive approach, the researcher formulates a set of hypotheses at the start of the research. Then, relevant research methods are chosen and applied to test the hypotheses to prove them right or wrong.


Generally, studies using deductive approach follow the following stages:

  • Deducing a hypothesis from theory.
  • Formulating the hypothesis in operational terms and proposing relationships between two specific variables.
  • Testing the hypothesis with the application of relevant method(s). These are typically quantitative methods such as regression and correlation analysis, or descriptive statistics such as the mean, median, and mode (a brief sketch follows this list).
  • Examining the outcome of the test, and thus confirming or rejecting the theory. When analysing the outcome of tests, it is important to compare research findings with the literature review findings.
  • Modifying the theory in instances when the hypothesis is not confirmed.
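As a purely illustrative sketch of the testing stage, the snippet below computes a correlation and a simple linear regression for an invented hypothesis about training spend and productivity (the data and variable names are made up; Python 3.10 or later is assumed for statistics.correlation and statistics.linear_regression):

    from statistics import correlation, linear_regression  # Python 3.10+

    # Invented data for a deduced hypothesis such as
    # "firms that spend more on training report higher productivity".
    training_spend = [1.0, 2.5, 3.0, 4.5, 5.0, 6.5, 7.0, 8.5]  # spend per employee (000s)
    productivity = [52, 58, 61, 66, 70, 72, 78, 83]            # productivity index

    r = correlation(training_spend, productivity)
    slope, intercept = linear_regression(training_spend, productivity)
    print(f"r = {r:.2f}; productivity ~ {slope:.1f} * spend + {intercept:.1f}")

    # A strong positive correlation would count towards confirming the hypothesis;
    # a weak or negative one would lead to rejecting it and revisiting the theory.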


John Dudovskiy


[1] Wilson, J. (2010) “Essentials of Business Research: A Guide to Doing Your Research Project” SAGE Publications, p.7

[2] Gulati, P. M. (2009) “Research Management: Fundamental and Applied Research” Global India Publications, p.42

[3] Babbie, E. R. (2010) “The Practice of Social Research” Cengage Learning, p.52

[4] Snieder, R. & Larner, K. (2009) “The Art of Being a Scientist: A Guide for Graduate Students and their Mentors”, Cambridge University Press, p.16

[5] Pelissier, R. (2008) “Business Research Made Easy” Juta & Co., p.3

ReviseSociology

A level sociology revision – education, families, research methods, crime and deviance and more!

Social Surveys – Strengths and Limitations


Last Updated on November 2, 2023 by Karl Thompson

Social Surveys are a quantitative, positivist research method consisting of structured questionnaires and interviews. This post considers the theoretical, practical and ethical advantages and disadvantages of using social surveys in social research. 

The strengths and limitations below are mainly based around surveys administered as self-completion questionnaires.


Theoretical Factors


Theoretical strengths of social surveys

Detachment, objectivity and validity.

Positivists favour questionnaires because they are a detached and objective (unbiased) method, where the sociologist’s personal involvement with respondents is kept to a minimum.

Hypothesis Testing

Questionnaires are particularly useful for testing hypotheses about cause and effect relationships between different variables, because the data they yield are quantifiable, which allows us to look for correlations.

For example, based on government statistics on educational achievement we know that white boys on Free School Meals achieve at a significantly lower level than Chinese girls on Free School Meals. We might reasonably hypothesise that this is because of differences in parental attitudes – Chinese parents may value education more highly, and they may be stricter with their children when it comes to homework, compared to white parents. Good questionnaire design and appropriate sampling would enable us to test this hypothesis, as sketched below. Good sampling would further allow us to see whether those white working-class boys who do well have parents with similar attitudes to those Chinese girls who do well.
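A toy cross-tabulation shows the kind of test this would involve; the figures below are invented, and a real study would use a proper significance test rather than a simple comparison of percentages:

    # Invented questionnaire results: parental attitude to education versus
    # whether the pupil achieved the expected grade, for pupils on Free School Meals.
    results = {
        ("values education highly", "achieved"): 42,
        ("values education highly", "not achieved"): 18,
        ("values education less", "achieved"): 15,
        ("values education less", "not achieved"): 45,
    }

    def achievement_rate(attitude):
        achieved = results[(attitude, "achieved")]
        total = achieved + results[(attitude, "not achieved")]
        return achieved / total

    for attitude in ("values education highly", "values education less"):
        print(f"{attitude}: {achievement_rate(attitude):.0%} achieved the expected grade")

    # A markedly higher rate in the first group would be consistent with the
    # parental-attitudes hypothesis; similar rates would count against it.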

Representativeness

Questionnaires allow the researcher to collect information from a large number of people, so the results should be more representative of the wider population than with more qualitative methods. However, this all depends on appropriate sampling techniques being used and on the researchers knowing who actually completes the questionnaire.

Reliability

Questionnaires are generally seen as one of the more reliable methods of data collection – if repeated by another researcher, then they should give similar results. There are two main reasons for this:

When the research is repeated, it is easy to use the exact same questionnaire meaning the respondents are asked the exact same questions in the same order and they have the same choice of answers.

With self-completion questionnaires, especially those sent by post, there is no researcher present to influence the results.

The reliability of questionnaires means that if we do find differences in answers, then we can be reasonably certain that this is because the opinions of the respondents have changed over time. For this reason, questionnaires are a good method for conducting longitudinal research where change over time is measured.


Theoretical Limitations

Issues affecting validity – Interpretivists make a number of criticisms of questionnaires.

The Imposition Problem

The imposition problem arises because when the researcher chooses the questions, they are deciding what is important rather than the respondent, and with closed-ended questions the respondent has to fit their answers into what’s on offer. The result is that respondents may not be able to express themselves in the way that they want to. The structure of the questionnaire thus distorts the respondents’ meanings and undermines the validity of the data.

Misinterpretation of questions

Interpretivists argue that the detached nature of questionnaires and the lack of close contact between researcher and respondent mean that there is no way to guarantee that the respondents are interpreting the questions in the same way as the researcher. This is especially true where very complex topics are involved – if I tick ‘yes’ to say that I am Christian, this could mean a range of things – from being baptised but not practising or really believing, to being a devout fundamentalist. For this reason interpretivists typically prefer qualitative methods where researchers are present to clarify meanings and probe deeper.

Researchers may not be present to check whether respondents are giving socially desirable answers, or simply lying, or even to check who is actually completing the questionnaire. At least with interviews researchers are present to check up on these problems (by observing body language or probing further, for example).

Issues affecting representativeness

Postal questionnaires in particular can suffer from a low response rate. For example, Shere Hite’s (1991) study of ‘love, passion, and emotional violence’ in America sent out 100,000 questionnaires but only 4.5% of them were returned.

All self-completion questionnaires also suffer from the problem of a self-selecting sample which makes the research unrepresentative – certain types of people are more likely to complete questionnaires – literate people for example, people with plenty of time, or people who get a positive sense of self-esteem when completing questionnaires.

Practical Factors


Practical Strengths of Social Surveys

Questionnaires are a quick and cheap means of gathering large amounts of data from large numbers of people, even if they are widely dispersed geographically, provided the questionnaire is sent by post or conducted online. It is difficult to see how any other research method could provide tens of millions of responses, as is the case with the UK national census.

In the context of education, Connor and Dewson (2001) posted nearly 4000 questionnaires to students at 14 higher education institutions in their study of the factors which influenced working class decisions to attend university.

With self-completion questionnaires there is no need to recruit and train interviewers, which reduces cost.

The data is quick to analyse once it has been collected. With online questionnaires, pre-coded questions can be updated live.

Practical Limitations

The fact that questionnaires need to be brief means you can only ever get relatively superficial data from them, thus for many topics, they will need to be combined with more qualitative methods to achieve more insight.

Although questionnaires are a relatively cheap form of gathering data, it might be necessary to offer incentives for people to return them.

Structured Interviews are also considerably more expensive than self-completion questionnaires.

Ethical Factors


Ethical strengths of surveys

When a respondent is presented with a questionnaire, it is fairly obvious that research is taking place, so informed consent isn’t normally an issue as long as researchers are honest about the purpose of the research.

It is also a relatively unobtrusive method, given the detachment of the researcher, and it is quite an easy matter for respondents to just ignore questionnaires if they don’t want to complete them.

Ethical Limitations

They are best avoided when researching sensitive topics.

Related Posts 

An Introduction to Social Surveys – Definition and Basic Types of Survey

Positivism, Sociology and Social Research – Positivists like the survey method.

Please click here for more posts on research methods .


Advantages and Disadvantages of Hypothesis

Looking for the advantages and disadvantages of hypotheses?

We have collected some solid points that will help you understand the pros and cons of hypotheses in detail.

But first, let’s understand the topic:

What is a Hypothesis?

A hypothesis is a smart guess or idea that you can test. It’s like a possible answer to a question that you can check through experiments or observations. It helps to guide investigations and learn new things.

What are the advantages and disadvantages of a hypothesis?

The following are the advantages and disadvantages of Hypothesis:


Advantages of Hypothesis

  • Guides research direction – A hypothesis can act like a compass, steering the course of research towards relevant data and information.
  • Simplifies data interpretation – It also makes it easier to understand data by providing a framework for its analysis and interpretation.
  • Encourages critical thinking – By posing a question or a proposition, a hypothesis stimulates our minds to think critically and analytically.
  • Helps in prediction making – It also assists us in making future predictions by setting a precedent based on the current study or research.
  • Supports scientific exploration – Moreover, a hypothesis is a fundamental part of scientific exploration, promoting investigation and discovery.

Disadvantages of Hypothesis

  • Can limit creative thinking – Hypotheses can sometimes restrict out-of-the-box thinking as they set a predefined path for research.
  • May lead to confirmation bias – They might also cause confirmation bias, where researchers only seek data that supports their hypothesis and ignore contradictory evidence.
  • Not always accurately predictive – Predictions made by a hypothesis are not always accurate, leading to potential errors in research conclusions.
  • Can be time-consuming to develop – The process of developing a well-structured hypothesis can be lengthy and require significant effort.
  • May overlook unexpected outcomes – Lastly, hypotheses can cause researchers to miss unexpected results as they focus solely on the proposed outcomes.
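
To make the points above more concrete, here is a minimal sketch of how a stated hypothesis turns raw survey answers into a specific, checkable claim. This is my own illustration rather than anything from the original post: the hypothesis ("social class is associated with whether respondents say they trust opinion polls"), the response counts and the use of Python's scipy library are all assumptions made purely for the example.

```python
# Minimal sketch: a hypothesis gives survey data a ready-made framework for analysis.
# Hypothetical hypothesis: "social class is associated with whether
# respondents say they trust opinion polls."
from scipy.stats import chi2_contingency

# Hypothetical counts from a self-completion questionnaire:
# rows = social class (working, middle); columns = trust polls (yes, no)
observed = [
    [120, 180],  # working-class respondents: 120 trust polls, 180 do not
    [200, 100],  # middle-class respondents: 200 trust polls, 100 do not
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.4f}")

# Because the hypothesis was stated in advance, the decision rule is clear:
# a small p-value counts against the null hypothesis of "no association".
if p_value < 0.05:
    print("Reject the null: class and trust in polls appear associated.")
else:
    print("Fail to reject the null: no clear association in these data.")
```

The particular test matters less than the fact that the hypothesis fixes, before any data are analysed, what would count as support and what would not.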

You can view other “advantages and disadvantages of…” posts by clicking here.

If you have a related query, feel free to let us know in the comments below.

6 Advantages of Hypothesis in Social Research

Hypotheses come in many different types and kinds, and it is not easy to develop a good one. A question therefore arises: what is the use of a hypothesis in social research? There is not one advantage but many. These are:

1. It is with the help of a hypothesis that it becomes easy to decide what type of data is to be collected and what type is simply to be ignored.

2. A hypothesis makes it clear what is to be accepted, proved or disproved, and what the main focus of the study is.

3. It helps the investigator to know the direction in which to move. Without a hypothesis, the researcher is simply groping in the dark rather than moving in the right direction.

4. A clear hypothesis saves time, money and energy that would otherwise be wasted, and spares the researcher the bother of trial and error.

5. It helps the researcher concentrate only on relevant factors and drop irrelevant ones; the many irrelevant factors that would otherwise creep into the study can easily be ignored (a short illustration follows at the end of this post).

6. A properly formulated hypothesis is always essential for drawing proper and reasonable conclusions.

The hypothesis, in brief, is the pivot of the whole study. Without a well-formulated hypothesis the whole study will be out of focus and it will be difficult to draw right and proper conclusions. In fact, the hypothesis is a necessary link between theory and investigation, one that results in additions to existing knowledge.
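
As a rough illustration of points 1 and 5 above, and again my own sketch rather than part of the original essay, consider the hypothetical hypothesis "the more hours students revise, the higher their exam marks". It names exactly two variables, so only those two need to be collected. The invented figures and the use of Python's scipy library are assumptions made for the example.

```python
# Minimal sketch: the hypothesis decides which data are relevant to collect.
# Hypothetical hypothesis: "the more hours students revise,
# the higher their exam marks."
from scipy.stats import pearsonr

# Only the two variables named in the hypothesis need to be collected;
# every other questionnaire item can be dropped.
hours_revised = [2, 5, 1, 8, 4, 7, 3, 6]          # hypothetical survey answers
exam_marks    = [48, 61, 40, 78, 55, 70, 50, 66]  # hypothetical exam results

r, p_value = pearsonr(hours_revised, exam_marks)
print(f"Pearson r = {r:.2f}, p = {p_value:.4f}")

# A clearly positive r with a small p-value would be consistent with the
# hypothesis; anything else sends the researcher back to the theory behind it.
```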

COMMENTS

  1. A Practical Guide to Writing Quantitative and Qualitative Research Questions and Hypotheses in Scholarly Articles

    INTRODUCTION. Scientific research is usually initiated by posing evidenced-based research questions which are then explicitly restated as hypotheses.1,2 The hypotheses provide directions to guide the study, solutions, explanations, and expected results.3,4 Both research questions and hypotheses are essentially formulated based on conventional theories and real-world processes, which allow the ...

  2. 8 Advantages of the Hypothesis

    Social Work Research and Evidence-based Practice. Welfare and Benefit Systems. Sociology Childhood Studies. Community Development ... Second, the chapter goes into many cognitive advantages of hypothesis-based research that exist because the human mind is inherently and continually at work trying to understand the world. The hypothesis is a ...

  3. Social Research: Definitions, Types, Nature, and Characteristics

    Abstract. Social research is often defined as a study of mankind that helps to identify the relations between social life and social systems. This kind of research usually creates new knowledge and theories or tests and verifies existing theories. However, social research is a broad spectrum that requires a discursive understanding of its ...

  4. 2.1 Approaches to Sociological Research

    A hypothesis is an explanation for a phenomenon based on a conjecture about the relationship between the phenomenon and one or more causal factors. In sociology, the hypothesis will often predict how one form of human behavior influences another. For example, a hypothesis might be in the form of an "if, then statement."

  5. 3.4 Hypotheses

    3.4 Hypotheses. When researchers do not have predictions about what they will find, they conduct research to answer a question or questions with an open-minded desire to know about a topic, or to help develop hypotheses for later testing. In other situations, the purpose of research is to test a specific hypothesis or hypotheses.

  6. The state of the art of hypothesis testing in the social sciences

    Abstract. Over many decades, one seemingly fatal critique after another has been launched against the use of social sciences' dominant practice of null-hypothesis significance testing, also known as NHST. In the last decade, we have witnessed a further upsurge in this critique, associated with suggestions as to how to conduct quantitative ...

  7. 1.3 Conducting Research in Social Psychology

    The Research Hypothesis. Because social psychologists are generally interested in looking at relationships among variables, they begin by stating their predictions in the form of a precise statement known as a research hypothesis. ... One advantage of correlational research designs is that, like observational research (and in comparison with ...

  8. 2.1C: Formulating the Hypothesis

    A hypothesis is an assumption or suggested explanation about how two or more variables are related. It is a crucial step in the scientific method and, therefore, a vital aspect of all scientific research. There are no definitive guidelines for the production of new hypotheses. The history of science is filled with stories of scientists claiming ...

  9. Theory in Social Research

    Theory as a peg. In the context of social science, Gilbert ( 2005 ) defines research as a sociological understanding of connections—connections between action, experience, and change—and theory is the major vehicle for realizing these connections as is illustrated in Fig. 4.3. Theory- the major vehicle.

  10. 3.1.3: Developing Theories and Hypotheses

    Theories and Hypotheses. Before describing how to develop a hypothesis, it is important to distinguish between a theory and a hypothesis. A theory is a coherent explanation or interpretation of one or more phenomena. Although theories can take a variety of forms, one thing they have in common is that they go beyond the phenomena they explain by including variables, structures, processes ...

  11. Case Study Methodology of Qualitative Research: Key Attributes and

    A case study is one of the most commonly used methodologies of social research. This article attempts to look into the various dimensions of a case study research strategy, the different epistemological strands which determine the particular case study type and approach adopted in the field, discusses the factors which can enhance the effectiveness of a case study research, and the debate ...

  12. Module 2: Research Methods in Social Psychology

    Module 2: Research Methods in Social Psychology. ... This systematic explanation of a phenomenon is a theory and our specific, testable prediction is the hypothesis. ... Describe observational research, listing its advantages and disadvantages. Describe case study research, listing its advantages and disadvantages. ...

  13. Organizing Your Social Sciences Research Paper

    The Practice of Social Research. 12th ed. Belmont, CA: Wadsworth Cengage, 2010; Brians, ... Describes the theoretical framework-- provide an outline of the theory or hypothesis underpinning your study. If necessary, define unfamiliar or complex terms, concepts, or ideas and provide the appropriate background information to place the research ...

  14. 13. Experimental design

    It is a useful design to minimize the effect of testing effects on our results. Solomon four group research design involves both of the above types of designs, using 2 pairs of control and experimental groups. One group receives both a pretest and a post-test, while the other receives only a post-test.

  15. Discovering Research Hypotheses in Social Science Using ...

    This is particularly in fields such as the social sciences, where automated support for scientific discovery is still widely unavailable and unimplemented. In this work, we introduce an automated system that supports social scientists in identifying new research hypotheses. With the idea that knowledge graphs help modeling domain-specific ...

  16. Social Science Research: Principles, Methods and Practices (Revised

    8. Sampling. Sampling is the statistical process of selecting a subset—called a 'sample'—of a population of interest for the purpose of making observations and statistical inferences about that population. Social science research is generally about inferring patterns of behaviours within specific populations. We cannot study entire ...

  17. (PDF) Significance of Hypothesis in Research

    relationship between variables. When formulating a hypothesis, deductive reasoning is utilized as it aims at testing a theory or relationships. Finally, hypothesis helps in discussion of findings and ...

  18. 2.2 Research Methods

    While field research often begins in a specific setting, the study's purpose is to observe specific behaviors in that setting. Field work is optimal for observing how people think and behave. It seeks to understand why they behave that way. However, researchers may struggle to narrow down cause and effect when there are so many variables floating around in a natural environment.

  19. Deductive Approach (Deductive Reasoning)

    A deductive approach is concerned with "developing a hypothesis (or hypotheses) based on existing theory, and then designing a research strategy to test the hypothesis" [1] It has been stated that "deductive means reasoning from the particular to the general. If a causal relationship or link seems to be implied by a particular theory or ...

  20. Social Surveys

    Social Surveys are a quantitative, positivist research method consisting of structured questionnaires and interviews. This post considers the theoretical, practical and ethical advantages and disadvantages of using social surveys in social research. The strengths and limitations below are mainly based around surveys administered as self-completion questionnaires.

  21. 6 Advantages of Hypothesis in Social Research

    There is not one but many advantages of hypothesis in social research. These are: 1. It is with the help of hypothesis, that it becomes easy to decide as to what type of data is to be collected and what type of data is simply to be ignored. ADVERTISEMENTS: 2. Hypothesis makes it clear as what is to be accepted, proved or disproved and that what ...

  22. Advantages and Disadvantages of Hypothesis

    The following are the advantages and disadvantages of Hypothesis: Advantages. Disadvantages. Guides research direction. Can limit creative thinking. Simplifies data interpretation. May lead to confirmation bias. Encourages critical thinking. Not always accurately predictive.

  23. 6 Advantages of Hypothesis in Social Research

    2. Hypothesis makes it clear as what is to be accepted, proved or disproved and that what is the main focus of study. 3. It helps the investigator in knowing the direction in which he is to move. Without hypothesis it will be just duping in the dark and not moving in the right direction. Image Source : cep-probation.org.