Institute of Medicine (US) Committee on Technological Innovation in Medicine; Gelijns AC, editor. Modern Methods of Clinical Investigation: Medical Innovation at the Crossroads: Volume I. Washington (DC): National Academies Press (US); 1990.

8 Meta-Analysis: A Quantitative Approach to Research Integration *

STEPHEN B. THACKER

The goal of an integrative literature review is to summarize the accumulated knowledge concerning a field of interest and to highlight important issues that researchers have left unresolved ( 1 ). Traditionally, the medical literature has been integrated in narrative form: an expert in a field reviews studies, decides which are relevant, and highlights their findings, both in terms of results and, to a lesser degree, methodology. Topics for further research may also be proposed. Such narrative reviews have two basic weaknesses ( 2 , 3 ). First, no systematic approach is prescribed to obtain primary data or to integrate findings; rather, the subjective judgment of the reviewer is used. As a result, no explicit standards exist to assess the quality of a review. Second, the narrative reviewer does not synthesize data quantitatively across studies. Consequently, as the number of studies in any discipline increases, so does the probability that erroneous conclusions will be reached in a narrative review ( 4 ).

Scientific research is founded on integration and replication of results; with the possible exception of a new discovery, a single study rarely makes a dramatic contribution to the advancement of knowledge ( 5 ). In this article I summarize the constraints on reviewers of the medical literature and review alternative methods for synthesizing scientific studies. In particular, I examine meta-analysis, a quantitative method for combining data, and illustrate its application to the medical literature with a clinical example. I then describe the strengths and weaknesses of meta-analysis and approaches to its evaluation. Finally, I discuss current research issues related to meta-analysis and highlight future research directions.

CONSTRAINTS ON LITERATURE REVIEW

The limitations of any approach to literature review can be summarized as follows ( 6 ): (a) sampling bias due to reporting and publication policies; (b) the absence in published studies of specific data desired for review; (c) biased exclusion of studies by the investigator; (d) the uneven quality of the primary data; and (e) biased outcome interpretation. These concerns are applicable to any form of literature review.

Two types of bias in the published literature must concern a reviewer. First, because authors and journal editors tend to report statistically significant findings, a review limited to published studies will tend to overestimate the effect size. In one survey, for example, 58 investigators indicated that they had conducted 921 randomized controlled trials, of which 196 (21.3 percent) were unpublished. Positive randomized controlled trials were significantly more likely to be published than negative trials (77 percent versus 42 percent, P < .001) ( 7 ). At the same time, one should not uncritically assume that methods are better in published studies, as the quality of published papers varies dramatically ( 8 ). Second, confirmatory bias, another form of publication bias, is the tendency to emphasize and believe experiences that support one's views and to ignore or discredit those that do not. In a study of 75 journal reviewers asked to referee identical experimental procedures, interrater agreement was poor and reviewers were biased against results contrary to their theoretical perspective ( 9 ). Consequently, new or unpopular findings also tend to be underreported in the published literature.

Data available from primary research studies may be inadequate for the literature reviewer. The reviewer is often confronted with selective reporting of primary findings, incorrect primary data analysis, and inadequate descriptions of original studies ( 10 ). In a study of psychotherapy outcomes, for example, an effect could not be calculated in 26 percent of studies because of missing data, a number comparable with previous reports ( 11 ).

In addition to identifying studies, the investigator must decide which reports to include in a review ( 3 ). One option is to use all available data and thereby maximize the representativeness of the conclusions. Using this approach, however, one will decrease the statistical validity of the data synthesis by including less rigorous studies. Exclusion of studies for methodological reasons, on the other hand, will increase the statistical validity but will decrease the size of the overall pool of data and may sacrifice the ability to generalize from the results.

Variable data quality is probably the most critical limitation for the reviewer. The effect of data quality was seen in a study of quality of life outcomes following coronary bypass graft surgery, when investigators found the estimates of benefit to be 15 percent less in randomized controlled trials than in trials using matching ( 12 ). Similarly, results of studies in medical care tend to show decreasing odds ratios with increased rigor of studies ( 8 ), although in one large study of psychotherapy, the effect was found to increase with increasing rigor ( 11 ). In quantitative reviews, statistical methods, including stratified analyses and multivariate methods, can be used to measure the impact on the results of varying quality in studies ( 8 , 13 , 14 ).

Although these constraints have been recognized previously, more recent concerns about research integration have stimulated new efforts to deal with them.

QUANTITATIVE APPROACHES TO SUMMARIZING ACROSS STUDIES

During the past several years, several approaches have been developed to quantitatively summarize data from different studies of the same or similar research problems. The simplest approach to the quantitative integration of research is vote counting. With this approach, the results of the studies under consideration are classified into three categories: (a) statistically significant in one direction, (b) statistically significant in the opposite direction, or (c) no statistically significant difference. The category receiving the most votes is then judged to approximate the truth ( 15 ). Although simple to use, voting methods take into account neither the magnitude of effect nor the sample size. In addition, this approach does not address the aforementioned problems inherent in traditional reviews, such as inadequate study methodology and uneven data quality.
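To make the mechanics concrete, here is a minimal sketch of vote counting. The effect estimates and p-values are hypothetical, not data from any study cited here; note that neither the magnitude of the effect nor the sample size enters the tally, which is exactly the weakness noted above.

```python
# Minimal sketch of vote counting with hypothetical (effect, p-value) pairs.
from collections import Counter

studies = [(0.30, 0.01), (0.10, 0.40), (-0.25, 0.03), (0.45, 0.004), (0.05, 0.60)]

def vote(effect, p, alpha=0.05):
    """Classify a study result into one of the three vote-counting categories."""
    if p >= alpha:
        return "no significant difference"
    return "significant, positive direction" if effect > 0 else "significant, negative direction"

tally = Counter(vote(effect, p) for effect, p in studies)
print(tally)                      # votes per category
print(tally.most_common(1)[0])    # category judged to approximate the truth
```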

In 1971, Light and Smith ( 15 ) proposed an alternative to voting methods that takes advantage of natural aggregations, or clusters, in the population. In this approach, one studies a problem in various clusters, such as neighborhoods or classrooms, and searches for explanations for differences among clusters. If these differences are explainable, the data can be combined and statistical variability can be described.

A third method is pooling, by which data from multiple studies of a single topic, such as β-blockade after myocardial infarction, are combined in a single analysis ( 16 ). This method is limited by the availability of raw data; by variation in study methods, populations, and outcomes under study; and by statistical considerations ( 17 , 18 ).
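The chapter does not tie pooling to a single statistic; one widely used choice for combining 2x2 tables from several trials is the Mantel-Haenszel pooled odds ratio, sketched below with hypothetical counts (placeholders, not data from reference 16).

```python
# Sketch of pooling 2x2 tables with the Mantel-Haenszel odds ratio.
# Each tuple: (a, b, c, d) = (treated events, treated non-events,
#                             control events, control non-events).
# The counts are hypothetical placeholders.
tables = [(15, 85, 25, 75), (8, 92, 14, 86), (30, 170, 45, 155)]

def mantel_haenszel_odds_ratio(tables):
    numerator = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    denominator = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return numerator / denominator

print(f"Pooled odds ratio: {mantel_haenszel_odds_ratio(tables):.2f}")
```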

In a 1976 study of the efficacy of psychotherapy, Glass ( 19 ) coined the term meta-analysis, "the statistical analysis of a large collection of analysis results from individual studies for the purpose of integrating the findings." Alternatively, meta-analysis can be defined as any systematic method that uses statistical analyses for combining data from independent studies to obtain a numerical estimate of the overall effect of a particular procedure or variable on a defined outcome ( 20 ).

While there have been several approaches to meta-analysis, the steps can be defined generally as (a) defining the problem and criteria for admission of studies, (b) locating research studies, (c) classifying and coding study characteristics, (d) quantitatively measuring study characteristics on a common scale, (e) aggregating study findings and relating findings to study characteristics (analysis and interpretation), and (f) reporting the results ( 21 , 22 ).

Problem formulation includes the explicit definition of both outcomes and potentially confounding variables. Carefully done, this step enables the investigator to focus on the relevant measures in the studies under consideration and to specify relevant methods to classify and code study characteristics.

The literature search includes a systematic approach to locating studies ( 1 ). First, one obtains information from the so-called invisible college, i.e., the informal exchange of information among colleagues in a particular discipline. Second, one searches indexes (e.g., Index Medicus and the Social Science Citation Index), abstracting services (e.g., International Pharmaceutical Abstracts), and computerized databases (e.g., MEDLINE and TOXLINE) to obtain research articles and sources of both published and unpublished data. Third, references in the studies already retrieved identify further sources. Retrieval of unreferenced reports (the so-called fugitive literature) and unpublished data from academic, private, and government researchers further minimizes selective reporting and publication biases.

Several methods are used to measure the results across studies ( 3 , 23 ). The most commonly used measure in the social sciences is the effect size, an index of both the direction and magnitude of the effect of a procedure under study ( 19 ). Glass and his colleagues ( 24 ) developed this method when assessing the efficacy of psychotherapy on the basis of data from controlled studies. One estimate of effect size for quantitative data is the difference between two group means divided by the control group SD: (X_t − X_c)/S_c, where X_t is the mean of the experimental or exposed group, X_c is the mean of the control or unexposed group, and S_c is the SD of the control group. Effect size expresses differences in SD units so that, for example, a study with an effect size of 0.2 SD units shows an effect half as large as that of a study with an effect size of 0.4 SD units. The appropriate measure of effect across studies will vary according both to the nature of the problem being assessed and to the availability of published data ( 7 , 25 ). Pooling of data from controlled clinical trials, for example, has been more widely used in the medical literature ( 16 , 26 ).
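A small sketch of the effect-size calculation just described, using hypothetical outcome scores for a treated and a control group:

```python
# Effect size as defined above: (mean of treated group - mean of control group)
# divided by the SD of the control group. Scores are hypothetical.
import statistics

treated = [24.0, 27.5, 22.0, 30.0, 26.5]
control = [21.0, 23.5, 19.0, 25.0, 22.5]

effect_size = (statistics.mean(treated) - statistics.mean(control)) / statistics.stdev(control)
print(f"Effect size: {effect_size:.2f} SD units")
```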

Effect size for proportions has been calculated in cohort studies as either a difference, P_t − P_c, or a ratio, P_t/P_c ( 3 ). The latter has the advantage of expressing the change relative to the control percentage and, in epidemiologic studies, is analytically equivalent to the risk ratio.
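The same two measures made concrete, again with hypothetical counts:

```python
# Risk difference (P_t - P_c) and risk ratio (P_t / P_c) from hypothetical counts.
events_treated, n_treated = 18, 120
events_control, n_control = 10, 115

p_t = events_treated / n_treated
p_c = events_control / n_control
print(f"Risk difference: {p_t - p_c:.3f}")
print(f"Risk ratio:      {p_t / p_c:.2f}")
```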

Whatever combination statistic is used, a systematic quantitative procedure to accumulate results across studies should include the following ( 27 ): (a) summary descriptive statistics across studies and the averaging of those statistics; (b) calculation of the variance of a statistic across studies (i.e., tests for heterogeneity); (c) correction of the variance by subtracting sampling error; (d) correction in the mean and variance for study artifacts other than sampling, such as measurement error; and (e) comparison of the corrected SD to the mean to assess the size of the potential variation across studies. A growing literature on statistical methods deals with problems in calculating effect size or significance testing as it relates to meta-analysis ( 28 , 29 ).
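The chapter states these steps generically; one common fixed-effect realization of steps (a) and (b) is an inverse-variance weighted average of study effect sizes together with Cochran's Q statistic as a test for heterogeneity. The sketch below uses hypothetical effect sizes and within-study variances and is not the specific procedure of reference 27.

```python
# Inverse-variance weighted pooling and Cochran's Q for heterogeneity.
# Effect sizes and within-study variances are hypothetical.
effects = [0.20, 0.35, 0.10, 0.28]
variances = [0.02, 0.05, 0.03, 0.04]

weights = [1.0 / v for v in variances]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Cochran's Q: weighted squared deviations of study effects from the pooled mean.
q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
degrees_of_freedom = len(effects) - 1

print(f"Pooled effect size: {pooled:.3f}")
print(f"Heterogeneity Q = {q:.2f} on {degrees_of_freedom} df")
```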

BENEFITS OF META-ANALYSIS

Meta-analysis forces systematic thought about methods, outcomes, categorizations, populations, and interventions as one accumulates evidence. In addition, it offers a mechanism for estimating the magnitude of effect in terms of a statistically significant effect size or pooled odds ratio. Furthermore, the combination of data from several studies increases generalizability and potentially increases statistical power, thus enabling one to assess more completely the impact of a procedure or variable ( 30 ). Quantitative measures across studies can also give insight into the nature of relationships among variables and provide a mechanism for detecting and exploring apparent contradictions in results. Finally, users of meta-analysis have expressed the hope that this systematic approach would be less subjective and would decrease investigator bias.

APPLICATIONS OF META-ANALYSIS IN HEALTH

Interest in clinical applications of meta-analysis has risen dramatically in recent years ( 31 , 32 ). An increasing number of attempts have been made to apply meta-analysis outside mental health and educational settings, including to chemotherapy in breast cancer ( 33 ), patient education interventions in clinical medicine ( 34 ), spinal manipulation ( 35 ), the effects of exercise on serum lipid levels ( 36 ), and duodenal ulcer therapy ( 37 ). There has also been discussion of the potential applications of meta-analysis to public health ( 38 ). An interesting application of meta-analysis was an effort to quantify the impact on survival and safety of a wide range of surgical and anesthetic innovations ( 39 ). More typical are efforts to draw conclusions from data pooled from a limited number of studies, usually controlled clinical trials ( 26 , 40 - 47 ). Pooling techniques have also been applied to data from non-randomized studies in attempts to address incompletely studied problems and to increase representativeness ( 25 , 48 , 49 ).

A CASE STUDY: ELECTRONIC FETAL MONITORING

In a 1979 review, Banta and Thacker ( 50 ) set out to assess the evidence for the efficacy and safety of the routine use of intrapartum electronic fetal monitoring. The independent variable was defined as the clinical application of all forms of electronic fetal monitoring to both high- and low-risk pregnant women; the outcomes measured were various measures of maternal and fetal morbidity and mortality, as well as the occurrence of cesarean delivery. Cost issues were also addressed.

The literature search began with an exchange of information with colleagues in obstetrics, pediatrics, epidemiology, technology assessment, and economics. References to published research articles were obtained from MEDLINE and Index Medicus and supplemented with references cited in the articles under review. Efforts were also made to obtain unpublished reports and professional meeting abstracts. Although this review was systematic and extensive, and comparable evidence was sought from the studies, quantitative analysis across studies was limited to descriptive statistics.

A 1987 meta-analysis of the same issue focused on evidence from randomized controlled trials; the earlier literature search was supplemented with information from the Oxford Database of Perinatal Trials and from direct correspondence with individual investigators ( 51 ). Variables were codified and, where possible, made comparable. For example, published measures of the Apgar score varied in timing (at 1, 2, and 5 minutes) and classification (abnormal was variably defined to include or exclude a score of 7); authors were therefore asked to provide one-minute Apgar scores in which a score of 7 was counted as normal.

The primary data were then organized into descriptive tables that listed study results for specific outcomes, such as low Apgar score, perinatal mortality, and cesarean delivery, as well as for measures of diagnostic precision, such as sensitivity, specificity, and predictive value (see Table 8.1 ) ( 50 ). The findings of the randomized controlled trials were evaluated for comparability and then pooled (see Table 8.2 ), and the pooled analyses were stratified by data quality ( 51 ). The results of the pooled analyses were then reported, conclusions were drawn, and recommendations were made.

TABLE 8.1. Accuracy of electronic fetal monitoring using Apgar score as measure of outcome.

TABLE 8.2. Pooled data from six controlled trials assessing efficacy of routine electronic fetal monitoring in labor.

The 1979 study concluded that the data did not support the routine use of electronic fetal monitoring and recommended additional randomized controlled trials and limitation of electronic fetal monitoring to high-risk pregnancies ( 50 ). The 1987 report included the randomized controlled trials already cited in the original study and three additional randomized controlled trials (seven randomized controlled trials from five countries). No known clinical trials were excluded from this report, although the largest trial ( 52 ), which included more subjects than the other six combined, was analyzed separately and compared with the pooled results of the others.

Analyses of different subsets of these studies based on differences in design (e.g., use of fetal scalp blood sampling) and study quality found minor variations in results but no changes in the basic findings. In both reports the pooled cesarean delivery rate was twofold higher in the group with electronic fetal monitoring. Data from the randomized controlled trial that scored highest in an assessment of the quality of study design and implementation, however, indicated that electronic fetal monitoring combined with fetal scalp blood sampling could be used to identify infants at risk of neonatal seizures ( 52 ). That trial had been suggested by pooled analyses of earlier randomized controlled trials ( 53 ). While both reports illustrate the advantages of a systematic and comprehensive approach to literature review, the meta-analytic methods of the 1987 report demonstrate both the increased statistical power derived from data pooling and the additional information gained from stratification of studies. Subsequently available trials reported results consistent with that meta-analysis ( 54 , 55 ).
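As an illustration of the kind of quality-stratified pooling described in this case study, a crude pooled risk ratio can be computed within each quality stratum. The trial counts and quality labels below are hypothetical placeholders, not the data of Tables 8.1 and 8.2.

```python
# Sketch of pooling an outcome within quality strata. All numbers are hypothetical.
# Each tuple: (events_efm, n_efm, events_control, n_control, quality).
trials = [
    (20, 300, 11, 300, "high"),
    (15, 250, 12, 245, "high"),
    (9, 150, 7, 150, "low"),
    (12, 200, 9, 210, "low"),
]

def crude_pooled_risk_ratio(subset):
    """Pool by summing events and denominators across the trials in a stratum."""
    e1 = sum(t[0] for t in subset)
    n1 = sum(t[1] for t in subset)
    e0 = sum(t[2] for t in subset)
    n0 = sum(t[3] for t in subset)
    return (e1 / n1) / (e0 / n0)

for stratum in ("high", "low"):
    subset = [t for t in trials if t[4] == stratum]
    print(f"{stratum}-quality trials: pooled risk ratio = {crude_pooled_risk_ratio(subset):.2f}")
```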

CRITICISMS OF META-ANALYSIS

When meta-analysis was introduced in the psychology literature, it did not meet with universal acceptance. It was variously described as “an exercise in mega-silliness” and “an abuse of research integration” ( 56 , 57 ). In addition to the constraints listed above related to literature review, the meta-analyst is confronted with additional challenges in an effort to synthesize data quantitatively across studies.

The statistical significance testing familiar to most clinicians is based on the assumption that data are selected randomly from a well-specified population. This assumption is violated by the non-random selection of studies and by multiple tests of the same data, whether through repeated publication of partial or entire data sets or through use of more than one outcome for each person. Nevertheless, standard parametric statistics have been considered sufficiently robust to be usable in meta-analyses ( 58 ).

The current use of parametric statistical methods for meta-analysis requires additional theoretical study ( 29 ). Other methodological issues of concern to meta-analysts include bias ( 59 ), variability between studies ( 60 ), and the development of models to measure variability across studies ( 61 ). Additional statistical research should include study of the impact of outliers on a meta-analysis and the potential insight that they could provide into a research question ( 28 ). Statistically valid methods to combine data across studies of varying quality and design, including data from case-control studies, will enable meta-analysts to maximize the value of their data syntheses ( 48 ).

One serious concern about quantitative reviews of the literature is that although meta-analysis is more explicit, it may be no more objective than a narrative review ( 62 ). Both critics and advocates of meta-analysis are concerned that an unwarranted sense of scientific validity, rather than true understanding, may result from quantification ( 63 , 64 ). In other words, sophisticated statistics will not improve poor data but could lead to an unwarranted comfort with one's conclusions ( 65 ).

EVALUATION OF META-ANALYSIS

The evaluation of a literature review, like its conduct, should be systematic and quantitative. Evaluation criteria for meta-analysis include the need for the following: (a) clear identification of the problems under study; (b) active effort to include all available studies; (c) assessment of publication bias; (d) identification of data used; (e) selection and coding based on theoretical framework, not convenience; (f) detailed documentation of coding; (g) use of multiple raters to assess coding, including assessment of interrater reliability; (h) assessment of comparability of the cases, controls, and circumstances in the studies analyzed; (i) consideration of alternative explanations in the discussion; (j) relation of study characteristics to problems under review; (k) careful limitation of generalization to the domain of the literature review; (l) reporting in enough detail to enable replication by a reviewer; and (m) guidelines for future research ( 3 , 66 ).

Meta-analysis is an attempt to improve traditional methods of narrative review by systematically aggregating information and quantifying its impact. Meta-analysis was introduced to address the problem of synthesizing the large quantity of information on a particular subject, a problem that has been exacerbated by the large volume of published research in the past 20 years. It is viewed, however, only as a step in the process of developing better tools to quantify information across studies. It should neither be considered the final word in quantitative reviewing nor be dropped in haste because of the problems and criticisms discussed above. Certainly, benefits are to be obtained from systematic and rigorous review of available information, including increases in power and generalizability, better understanding of complex issues, identification of correlations among variables, and identification of gaps to be addressed by appropriate research.

When criticizing meta-analysis, one must distinguish between those problems that are inherent in any literature review and those that are specifically a problem with meta-analysis. For example, data quality, sampling bias, and data retrieval are limitations inherent in any literature review. Similarly, while outcome interpretation may be affected by the various styles of summarizing research findings, biases are not limited to the meta-analyst. On the other hand, one must be wary of inappropriate weight being given to a procedure just because it is quantitative, particularly when used by those who do not understand the limitations of the statistical methods utilized. Finally, critics should empirically test the impact of their criticisms so as to take meta-analysis or its alternative methods of quantitative summarization of research to the next level of usefulness.

It has been suggested that investigators should combine quantitative and qualitative review data to enable practitioners to apply results to individual patients or program problems ( 67 ). In this way, researchers can investigate issues that are important but difficult to quantify. Nonquantitative information, such as expert opinion and anecdotal evidence, does have a significant impact on policy. Finally, one must be concerned that even the best meta-analysis, although it may represent all available trials and relevant studies, may not represent clinical practice because of how and where research is conducted ( 63 ).

Several things can be done to assess meta-analysis and to improve methods of quantitative review. First, one can compare the results of meta-analyses with those of narrative reviews to identify differences in interpretation and conclusions. In one study that compared a statistical procedure for summarizing research findings with narrative reviews, the statistical reviewers were more likely to support the hypothesis, both in direction and in magnitude, although the basic recommendations did not differ between the groups ( 68 ). A second important area of research is statistical methodology: both theoretical research into the assumptions of alternative methods and empirical testing of the accuracy and efficiency of these methods need to be undertaken. Third, methods to assess the quality of meta-analyses need to be tested and refined ( 66 ). Finally, in assessing meta-analysis, one must be careful to limit the extrapolation of conclusions to the field of study covered by the literature review. Although this is true of any cumulative review, the boundaries of the review must be carefully delineated and interpretation confined to those boundaries.

In summary, the systematic, quantitative review and organization of the cumulative experience in a subject matter is fundamental to good scientific practice. Meta-analysis is a methodology that warrants testing and empirical evaluation. This is similarly true of alternative approaches to synthesizing information. The need to use available information optimally cannot be avoided by the rational scientist. The particular framework of review—be it meta-analysis or some other approach—should be addressed as an important scientific endeavor. The importance of addressing this issue must be underscored in an era where scientific information is increasing exponentially and the potential for application of these findings is unprecedented.

This paper was previously published in The Journal of the American Medical Association 1988;259:1685-1689.
