Quantitative Research Methods for Social Work: Making Social Work Count, Barbra Teater, John Devaney, Donald Forester, Jonathan Scourfield and John Carpenter
Hugh McLaughlin, Quantitative Research Methods for Social Work: Making Social Work Count, Barbra Teater, John Devaney, Donald Forester, Jonathan Scourfield and John Carpenter, The British Journal of Social Work, Volume 52, Issue 3, April 2022, Pages 1793–1795, https://doi.org/10.1093/bjsw/bcaa116
I remember sharing a lift at a joint IFSW/IASSW World Social Work Conference with an American professor who taught research methods. After we chatted about teaching research methods, he informed me gleefully that his students are taught qualitative methods first; however, once they get to him, none of them leave his classroom without being ‘converted to quantitative methods’! At this point he stepped out of the lift, leaving our discussion in the air.
This book arose from funding from the Economic and Social Research Council to address the quantitative skills gap in the social sciences. The grants were applied for under the auspices of the Joint University Council Social Work Education Committee to upskill social work academics and develop a curriculum resource with teaching aids. I was saddened to discover that many of the free resources are no longer available and wondered if anything could be done to remedy this.
The book is unusual for the UK in that its major focus is on quantitative methods, unlike other social work research methods books, which tend to cover both qualitative and quantitative methods (Campbell et al., 2017; Smith, 2009). Until this book came along, many of us were happy using non-social work research methods texts to learn about quantitative methods (Bryman, 2015). This authoritative text offers a fresh and imaginative approach to teaching quantitative methods. It is set out in an incremental and easily accessible format, with a series of exercises and critical thinking boxes and suggested readings at the end of each chapter. The exercises and critical thinking questions are well thought out, with a full answer to each at the back of the book, thus making it really useful for those of us who teach research methods. It is also aimed at social work academics, social work students and practitioners who want to learn more about quantitative approaches: where they are useful, how they can be read and understood, and how they can be applied to their setting.
The book helpfully follows a small number of exemplars, ‘research in practice’ from easily accessed research publications covering both adults and children’s services. These ‘research in practice’ examples are then developed throughout the book to show how they can be understood in greater detail when using different quantitative approaches.
The book begins by providing a rationale for quantitative approaches, discussing the importance of understanding numerical concepts and data whilst acknowledging the social construction of statistics. One of the features of the book is its awareness of the limitations of quantitative research and what it can and cannot do. The first four chapters set the groundwork by exploring key concepts and their application to social work, leading up to the introduction of quantitative research techniques.
The book then moves from discussing how to use numbers to ‘describe a sample’ to ‘making a decision with confidence’ and on to examining whether two or more variables are related. Each chapter builds on the previous one, becoming increasingly complex and requiring the reader to concentrate more and more, applying what they have learned in previous chapters.
The book does not duck the ethical challenges of quantitative work, challenging the view that randomised controlled trials are inherently unethical whilst identifying a number of ethical conundrums to make the reader think. If I have a criticism of the book, it is that I would have liked to have seen a section on how you can work with service users to inform quantitative research methods.
The final chapter, in which the authors consider how mixed quantitative and qualitative methods can best capture the complex and paradoxical nature of social work practice and policy, is where I see most hope for social work. Returning to my American professor: I am not converted, and still prefer qualitative methods, but I willingly accept the need to be more quantitatively competent. I thus see this excellent book as an essential tool in the methodological toolbox of every social work academic, student or practitioner. It makes an important contribution to social work for those who want to learn more about how to read, learn and conduct social work research. I agree totally with the authors' (2017, p. 7) claim that:
As a profession we need to become more confident and competent in working with quantitative data in order to be able to answer a fuller range of research questions, as well as to be able to draw upon a wider body of research to inform policy and practice decisions.
Bryman, A. (2015) Social Research Methods, 5th edn, Oxford, Oxford University Press.
Campbell, A., Taylor, B. and McGlade, A. (2017) Research Design in Social Work: Qualitative and Quantitative Methods, London, Sage.
Smith, R. (2009) Doing Social Work Research, Glasgow, McGraw-Hill.
- Online ISSN 1468-263X
- Print ISSN 0045-3102
- Copyright © 2024 British Association of Social Workers
Causality and Causal Inference in Social Work: Quantitative and Qualitative Perspectives
Lawrence A. Palinkas
School of Social Work, University of Southern California, Los Angeles, CA, USA
Achieving the goals of social work requires matching a specific solution to a specific problem. Understanding why the problem exists and why the solution should work requires a consideration of cause and effect. However, it is unclear whether it is desirable for social workers to identify cause and effect, whether it is possible for social workers to identify cause and effect, and, if so, what is the best means for doing so. These questions are central to determining the possibility of developing a science of social work and how we go about doing it. This article has four aims: (1) provide an overview of the nature of causality; (2) examine how causality is treated in social work research and practice; (3) highlight the role of quantitative and qualitative methods in the search for causality; and (4) demonstrate how both methods can be employed to support a “science” of social work.
In defining the mission of the profession of social work to enhance human well-being and help meet the basic needs of all people, the Preamble of the National Association of Social Workers Code of Ethics (2013) places great emphasis on the environmental forces that create, contribute to, and address problems in living. Implied in this emphasis is the assumption of a causal link between these environmental forces and the problems they create or contribute to. For instance, when faced with the challenge of providing care to a client with a depressive disorder, we first attempt to identify the factors that contributed to the onset of the disorder. Furthermore, to address these problems, we must appropriately and effectively match a specific solution to a specific problem. This, too, requires us to consider a causal link between the solution and its outcome (elimination of the problem or mitigation and treatment of its impacts). Thus, a client with a depressive disorder may benefit from treatment that addresses the symptoms, which may involve pharmacotherapy and/or psychotherapy.
However, the complexity of the issues we face as social workers forces us to consider whether it is desirable, much less even possible, to identify cause and effect, and if so, what is the best means for doing so. The issue of desirability has been raised in conjunction with criticism of the value of the scientific method in general and scientifically based evidence-based practice in particular ( Heineman, 1981 ; Karger, 1983 ; Otto & Ziegler, 2008 ; Tyson, 1995 ). The issue of feasibility has been raised in conjunction with the claim that the complexity of social phenomena renders the use of scientific methods problematic and incomplete ( Otto & Ziegler, 2008 ; Rosen, 2003 ). These questions are by no means limited to social work, but they are central to our consideration of whether it is possible to develop a science of social work and, if so, how we go about doing it.
This article has four aims: (1) provide an overview of the nature of causality and causal inference; (2) examine how causality and causal inference are treated in social work research and practice; (3) highlight the role of quantitative and qualitative methods in the search for causality; and (4) demonstrate how both methods can be employed to support a “science” of social work.
The Nature of Causality and Causal Inference
The human sciences, including social work, place great emphasis on understanding the causes and effects of human behavior, yet there is a lack of consensus as to how cause and effect can and should be linked ( Parascandola & Weed, 2001 ; Salmon, 1998 ; Susser, 1973 ). What little consensus exists seems to be that effects are assumed to be consequences of causes. Causes and effects may be singular in nature (e.g., cigarette smoking causes cancer) or they may be multifactorial (e.g., cancer is caused by genetic predisposition, certain health behaviors like cigarette smoking and diet, and exposure to environmental hazards like toxic chemicals; cigarette smoking causes cancer, hypertension, diabetes, and emphysema). This relationship can be viewed both spatially and temporally. For instance, the presence of a depressive disorder in an individual may have some determinants that are distal (genetic predisposition, childhood experience) and some determinants that are proximal (e.g., recent life events like loss of employment, death of spouse) to the current episode of depressive symptoms. A link between one or more causes and one or more effects may also be viewed as direct or indirect (mediating or moderating; Koeske, 1993 ; Kramer, 1988 ; Susser, 1973 ). Thus, while the death of a spouse may contribute to the onset of a depressive disorder, it may do so directly or indirectly, by virtue of depriving the survivor of an important source of social support. Likewise, the death of a spouse may contribute differentially to the risk of a depressive disorder depending on whether the survivor is male or female. Causal inference, in turn, may be viewed as the process of establishing the link between the perceived cause or causes and the perceived effect or effects.
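The distinction between total, direct, and indirect (mediated) effects can be sketched with a small simulation. This is purely illustrative: the variable names and effect sizes below are assumptions for the sketch, not figures from any study cited here.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5_000

# Hypothetical data-generating process: death of a spouse raises
# depressive symptoms partly through loss of social support (mediator).
spouse_death = rng.binomial(1, 0.2, n).astype(float)
support_loss = 0.8 * spouse_death + rng.normal(0, 1, n)
depression = 0.3 * spouse_death + 0.5 * support_loss + rng.normal(0, 1, n)

def ols(y, *xs):
    """Least-squares coefficients (intercept first)."""
    X = np.column_stack([np.ones(len(y)), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0]

total = ols(depression, spouse_death)[1]                  # ~0.3 + 0.8*0.5 = 0.70
direct = ols(depression, spouse_death, support_loss)[1]   # ~0.30
print(f"total effect:  {total:.2f}")
print(f"direct effect: {direct:.2f}")
```

The gap between the total and direct coefficients is the indirect effect carried by the mediator, which is the quantity a mediation analysis tries to isolate.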
Causation may also be viewed from the perspective of the distinction between necessary and sufficient causes. For instance, “a given exposure is considered a necessary cause of an outcome if the outcome does not occur in its absence. It is a sufficient cause if it always (i.e., in all individuals) leads to an outcome without requiring the presence or absence of any other factors” ( Kramer, 1988 , p. 256). However, causes may also be multifactorial, in which case causes are neither necessary nor sufficient for any given individual. The necessary and sufficient cause definitions assume that all causes are deterministic, while a probabilistic view of causation is one in which a cause increases the probability or chance that its effect will occur but may be neither necessary nor sufficient for its occurrence ( Parascandola & Weed, 2001 ). Kramer (1988) and others ( Kleinbaum, Kupper, & Morgenstern, 1982 ; Parascandola & Weed, 2001 ) argue that a probabilistic definition of causation is more consistent with the aims of applied human sciences like public health.
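The deterministic versus probabilistic distinction can be made concrete with a toy simulation. The risk figures of 0.30 and 0.10 are arbitrary illustrative values, not estimates from the literature discussed here.

```python
import random

def outcome_deterministic(exposure: bool) -> bool:
    # Necessary AND sufficient cause: the outcome occurs
    # if and only if the exposure is present.
    return exposure

def outcome_probabilistic(exposure: bool, rng: random.Random) -> bool:
    # Probabilistic cause: exposure raises the chance of the outcome
    # (0.30 vs 0.10 here) but is neither necessary (unexposed cases
    # can still show the outcome) nor sufficient (most exposed do not).
    p = 0.30 if exposure else 0.10
    return rng.random() < p

rng = random.Random(0)
n = 10_000
exposed = sum(outcome_probabilistic(True, rng) for _ in range(n)) / n
unexposed = sum(outcome_probabilistic(False, rng) for _ in range(n)) / n
print(f"risk if exposed:   {exposed:.2f}")
print(f"risk if unexposed: {unexposed:.2f}")
```

Under the probabilistic view, the causal claim rests on the difference between the two observed risks rather than on a one-to-one mapping of exposure to outcome.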
Our current notions of causation and causal inference generally owe their intellectual origins to the British social philosopher David Hume (1738/1975). Hume's criteria of causation emphasize the importance of temporal priority, in which causes must necessarily occur or exist prior to the occurrence or existence of an effect (e.g., the cause and effect must be contiguous in space and time, the cause must be prior to the effect, and the relationship between cause and effect must be constant). Hume's criteria also stress the one-to-one relationship between cause and effect (e.g., the same cause always produces the same effect, and the same effect only occurs in the presence of the same cause; where several different objects produce the same effect, it must be the result of some characteristic the causes have in common). However, Hume's criteria do not specify the tools used to describe that relationship; in other words, they provide no guidance on the methods used to determine whether a relationship exists between two variables or phenomena and whether the nature of that relationship is causal, correlational, or coincidental. A more contemporary version of these criteria, widely used in the field of public health, was developed by the British biostatistician Austin Bradford Hill ( Hill, 1965 ; see Table 1 ). Like Hume's, these criteria give priority to the temporal relationship between cause and effect (i.e., the first must precede the second) and to specificity (i.e., a single cause produces a specific effect), but they also suggest the importance of measurement or quantification of the relationship (i.e., strength of association and existence of a dose-response relationship) and of experimental designs (i.e., experiment). They also suggest that support for a causal inference requires confirmation using other types of information or knowledge (i.e., consistency, plausibility, and coherence).
Table 1. Hill's Criteria of Causation. Source: Hill (1965).
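Hill's nine viewpoints (strength of association, consistency, specificity, temporality, biological gradient or dose-response, plausibility, coherence, experiment, and analogy) can be framed as a simple checklist. The sketch below treats temporality as the one indispensable criterion, which is a common reading of Hill; the evidence values are purely hypothetical.

```python
HILL_CRITERIA = [
    "strength of association",
    "consistency",
    "specificity",
    "temporality",
    "biological gradient (dose-response)",
    "plausibility",
    "coherence",
    "experiment",
    "analogy",
]

def assess(evidence: dict) -> list:
    """Return the Hill criteria supported by the evidence.

    `evidence` maps criterion name -> bool. Temporality is treated
    as a hard requirement: without temporal priority of the cause,
    no causal inference is entertained.
    """
    if not evidence.get("temporality", False):
        return []
    return [c for c in HILL_CRITERIA if evidence.get(c, False)]

# Hypothetical evidence profile for a smoking -> disease association.
evidence = {"strength of association": True, "temporality": True,
            "biological gradient (dose-response)": True}
print(assess(evidence))
```

Hill himself stressed that the criteria are aids to judgment rather than a scoring rule, so a checklist like this is a mnemonic, not a test.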
Causality in Social Work Research and Practice
Lewis (1975) argues that causal inference is an essential part of social work practice as well as social work research. However, the association of causality and causal inference in social work with logical positivism and critical rationalism, with their emphasis on universal laws, has subjected the search for causal linkages to criticism from those who view it as deterministic, limited in its ability to address the complexity of social phenomena, and inconsistent with the goals of the profession ( Otto & Ziegler, 2008 ). As Padgett (2008, p. 168) observes, “anti-positivistic skeptics question whether the search for causation is plausible or desirable, given the postmodern premise that facts are ‘fictitious’ ( Lofland & Lofland, 1995 ).” Nevertheless, embedded in much of social work research is an implicit understanding that actions have consequences and that most of the characteristics of the human condition can be linked directly or indirectly to one or more factors or events that are in some way responsible for that condition.
In social work research, randomized controlled trials (RCTs) have been used primarily to demonstrate causal linkages between specific interventions that are treated as independent variables and specific outcomes that are treated as dependent variables. For instance, Ell and colleagues (2010) assessed the effectiveness of an evidence-based, socioculturally adapted, collaborative depression care intervention for treatment of depression and diabetes in a group of 387 predominantly Hispanic primary care patients recruited from two safety net clinics. The causal chain tested in this study was that the intervention (which included problem-solving therapy and/or antidepressant medication based on a stepped-care algorithm; first-line treatment choice; telephone treatment response, adherence, and relapse prevention follow-up over 12 months; plus systems navigation assistance) resulted in an improvement in mood (or a reduction in depressive symptoms), which, in turn, resulted in improvement in Hemoglobin A1C levels. In this instance, improvement in Hemoglobin A1C levels was a direct effect of the reduction in depressive symptoms and an indirect effect of the depression treatment intervention. In another example, Glisson and colleagues (2010) conducted an RCT of the effectiveness of Multisystemic Therapy (MST) and the Availability, Responsiveness, and Continuity (ARC) organizational intervention in reducing problem behavior in delinquent youth residing in 14 rural counties in Tennessee, using a 2 × 2 design in which youth were randomized into receiving MST or treatment as usual, and counties were randomized into receiving the ARC intervention. A multilevel mixed effects regression analysis of 6-month treatment outcomes found that total youth problem behavior in the MST plus ARC condition was at a nonclinical level and significantly lower than in the other conditions.
The causal chain tested in this study was that the ARC intervention resulted in the successful implementation of MST, which, in turn, resulted in a reduction of youth problem behavior. In this instance, reduction of youth problem behaviors was a direct effect of the MST intervention and an indirect effect of the ARC organizational intervention.
However, qualitative methods have also been used in social work research to make causal inferences linking two sets of phenomena. For instance, Gutierrez, GlenMaye, and DeLois (1995) conducted interviews with administrators and staff at six different agencies to identify elements of the organizational context of empowerment practice. Using a modified grounded theory approach, they identified four sets of factors (funding sources, social environment, intrapersonal issues, and interpersonal issues) that constitute barriers to maintaining and implementing an empowerment-based approach in social work practice. For instance, “differing philosophies or politics of more traditional service providers (cause) negatively affected the willingness or ability of empowerment-based agencies to refer clients to other services (effect)” ( Gutierrez, GlenMaye, & DeLois, 1995 , p. 252, parentheses added). Alaggia and Millington (2008) conducted a phenomenological analysis of the lived experience of 14 men who were sexually abused in childhood to “generate knowledge … on the effects of boyhood sexual abuse on the present lives of men, and to understand how those effects found expression in men's everyday lives” (p. 267). In this instance, sexual abuse during childhood is treated as the cause, and anger and rage, sexual disturbance and ambivalence, and loss and hope were identified as effects. The attempt to examine effects of childhood sexual abuse using a phenomenological approach is especially noteworthy because the focus on interpretative understanding or verstehen is often seen as a rejection of causal understanding (cf. Otto & Ziegler, 2008 ).
Qualitative and Quantitative Perspectives on Causality
Although these two studies are representative of the use of different qualitative methodological approaches to identify connections between certain phenomena and certain outcomes, in social work, as in other fields, priority in the determination of causality is given to quantitative methods in general and RCTs in particular. Otto and Ziegler (2008) note that RCTs are considered the best form of evidence of practice effectiveness ( McNeece & Thyer, 2004 ) and, therefore, of causality. “These designs serve to control or cancel out any differences that are effects of other Events (Z) to assess whether Event X (cause)—as independent variable—is nonspuriously conjunct with Event Y (effect) in the context of a controlled ceteris paribus condition” ( Otto & Ziegler, 2008 , p. 275). They further argue that the criteria of using the RCT design to determine causal connections between an intervention and its outcomes can hardly be applied to qualitative research such as ethnographic studies or deep hermeneutical interviews ( Otto & Ziegler, 2008 , p. 275). Consequently, qualitative studies are placed on a lower rank of evidence of causality ( McNeece & Thyer, 2004 ), below what Cook and Campbell (1979) considered the minimum interpretable design necessary and adequate for drawing valid conclusions about the effectiveness of treatments ( Otto & Ziegler, 2008 , p. 275).
However, there are inherent limitations to relying on RCTs to determine causality in social work research. Circumstances may preclude the use of the RCT design, including small sample sizes, especially in multilevel studies where single individuals are embedded in organizations like schools or agencies; concerns about external validity; the ethics of providing service to one group and denying the same service to another group of clients; the expense and logistics involved in conducting such research; the unwillingness of participants or organizations to accept randomization; and the expense and logistical challenges in conducting longitudinal follow-up assessments ( Glasgow, Magid, Beck, Ritzwoller, & Estabrooks, 2005 ; Landsverk, Brown, Chamberlain, Palinkas, & Horwitz, 2012 ; Palinkas & Soydan, 2012 ).
Furthermore, causal models can be constructed using quantitative or qualitative data. In the example presented in Figure 1 , the model of social capital effects on psychosocial adjustment of Chinese migrant children was developed by Wu, Palinkas, and He (2010) using structural equation modeling. On the other hand, using qualitative data collected from leaders of county-level child welfare, mental health and juvenile justice systems in California, Palinkas and colleagues (2014) also developed a model of interorganizational collaboration that posited causal linkages between characteristics of the outer context (availability of funding, legislative mandates, size of jurisdiction, and extent of responsibility for same client population), inner context (characteristics of the participating organizations and individual members of those organizations), and characteristics of the collaboration itself (focus on a single vs. multiple initiatives, formality, frequency of interaction) and the structure of social networks that, in turn, are linked to the pace and progress of implementation of evidence-based practices (see Figure 2 ).
Figure 1. Standardized solutions for the structural model of social capital effects on the psychosocial adjustment of Chinese migrant children. Source: Wu, Palinkas, and He (2010).
Figure 2. Heuristic model of interorganizational collaboration for implementation of evidence-based practices. Source: Palinkas et al. (2014).
Finally, not all qualitative methodologists have rejected the notion that the construction of causal inferences is both desirable and possible. Miles and Huberman (1994, p. 4), for instance, “aim to account for events, rather than simply to document their sequence. We look for an individual or a social process, a mechanism, a structure at the core of events that can be captured to provide a causal description of the forces at work” (italics in original). Sayer (2000) argues that causal explanation is not only legitimate in qualitative research but a particular strength of this approach, although it uses a different strategy from quantitative research, based on a process rather than a variance concept of causality. Ragin's (1987) qualitative comparative analysis involves representing each case as a combination of causes and effects that can then be compared with each other. Another qualitative comparative method, analytic induction, is described as an “exhaustive examination of cases in order to prove universal, causal generalizations” ( Vidich & Lyman, 2000 , p. 57). Denzin (1978) considered analytic induction to be one of three major strategies for establishing the existence of a causal relationship, the other two being the statistical method and the experimental method. Even Lofland (1971), considered a skeptic of the search for causation, argued that the strong suit of the qualitative researcher is the ability to provide orderly, rich descriptive detail, stating that “it is perfectly appropriate that one be curious about causes, so long as one recognizes that whatever account or explanation he develops is conjecture” (p. 62).
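Ragin's idea of representing each case as a configuration of conditions and an outcome can be sketched in miniature. The condition names below are invented for illustration and are not drawn from Ragin's own data.

```python
# Toy qualitative comparative analysis (QCA) sketch: each case is a
# tuple of binary conditions plus an observed binary outcome.
cases = [
    # (stable_funding, supportive_environment) -> empowerment_sustained
    ((1, 1), 1),
    ((1, 0), 1),
    ((0, 1), 0),
    ((0, 0), 0),
    ((1, 1), 1),
]

def consistent_condition(index: int) -> bool:
    """True if condition `index` perfectly tracks the outcome across
    all observed cases, i.e., it is a candidate necessary-and-sufficient
    condition in this toy data set."""
    return all(conds[index] == outcome for conds, outcome in cases)

# Condition 0 (stable_funding) matches the outcome in every case;
# condition 1 (supportive_environment) does not.
print([i for i in range(2) if consistent_condition(i)])
```

Real QCA minimizes Boolean combinations of conditions rather than testing single conditions, but the case-as-configuration representation is the same.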
It would seem, therefore, that quantitative and qualitative methods each present certain advantages and disadvantages in making causal inferences whether one identifies with a logical positivist or postpositivist or a postmodernist, social constructivist view of human nature, or is more at ease with the process of counting quantitative data or interpreting qualitative data. However, as no single method is adequate to the challenge of linking cause and effect in a deterministic or probabilistic fashion, it is perhaps prudent to heed the advice of Campbell (1999) , who maintained that because proving causality with certainty in explaining social phenomena is problematic and because all methods for proving causality are imperfect, multiple methods, both quantitative and qualitative, are needed to generate and test theory, improve understanding over time of how the world operates, and support informed policy making and social program decision making.
Causality and the Science of Social Work
The path to causality can be viewed as moving across a series of steps that begin with identification and proceed to description, explanation generation, explanation testing, and prescription or control. Identification first occurs through reports or studies that point to the existence of a previously unknown or unrecognized phenomenon. Description of the phenomenon may involve qualitative (narratives, case studies) and/or quantitative (frequencies, percentages) data. Both methodological approaches may be employed in the next step, which is the identification of associations between variables and the generation of hypotheses to be tested that can help to explain why the variables are in association with one another. The next step is then to test the hypotheses and the validity of the presumed explanation. This step usually requires the use of prospective longitudinal designs and the use of quantitative methods. The final step is the construction of experimental conditions that enable the investigator to simultaneously control for the possibility of alternate explanations for the observed association between one variable presumed to be the cause and the other variable or variables presumed to be the effect. This step usually requires the use of the RCT design and the use of quantitative methods.
One can conceive of two separate arguments that link these discrete steps in a meaningful way. In the first argument, the further we proceed along the path of scientific inquiry, the more we rely on quantitative methods to make causal inferences and support the existence of a causal link or relationship. However, as noted previously, there are inherent limitations to relying on RCTs to determine causality in social work research. In the second argument, qualitative and quantitative methods each make distinct contributions to the task of proving causality. Thus, in using quantitative methods, priority is placed on confirmation of hypotheses through experimentation and a narrow or segmented focus on potential causal explanations, while in using qualitative methods, priority is placed on exploration of phenomena and generation of hypotheses through observation and a broad or holistic focus on the social context in which causal links occur.
Although they may differ with respect to the value placed on each set of methods (with quantitative methods being considered dominant in the first argument and coequal with qualitative methods in the second), both arguments posit a relationship between qualitative and quantitative methods, and both assume that each set of methods has a role to play in understanding causality and in making causal inferences. Relationships between the two sets of methods have been increasingly articulated using the terminology of mixed methods, defined as the integrated use of quantitative and qualitative methods in ways that provide greater understanding or insight into a phenomenon than might be obtainable from either method used alone ( Palinkas, Horwitz, Chamberlain, Hurlburt, & Landsverk, 2011 ). Creswell and Plano Clark (2011) identify five different types of mixed methods designs. A Triangulation design is used when there is a need to compare results from different sources of information regarding the same hypothesized phenomenon or parameter to seek corroboration. An Explanatory or complementary design is used to understand a phenomenon more comprehensively or completely. An Exploratory design is used for instrument, taxonomy, or typology development, where qualitative data serve as an initial exploration to identify variables, constructs, taxonomies, or instruments for a subsequent quantitative study phase. An Embedded or Expansion design is used to assess different hypothesized phenomena or parameters using different methods. Finally, an Initiation or Transformative design is used to understand a phenomenon more insightfully, discovering new ideas, perspectives, and meanings. Each of these designs may be used to identify, describe, explain, verify, and control the relationships linking one phenomenon or set of phenomena to another phenomenon or set of phenomena in a causal fashion.
This combined use of quantitative and qualitative methods may occur simultaneously, in which one method usually drives the project theoretically with the supplemental project designed to elicit information that the base method cannot achieve or for the results to inform in greater detail about one part of the dominant project, or sequentially, in which the method that theoretically drives the project is used first, with the second method designed to resolve problems/issues uncovered by the first study or to provide a logical extension from the findings of the first study.
An illustration of the use of mixed method designs to examine causality and causal inference can be found in the Child STEPS Effectiveness Trial (CSET), carried out by the Research Network on Youth Mental Health and funded by the John D. and Catherine T. MacArthur Foundation ( Chorpita et al., 2013 ; Weisz et al., 2012 ). The CSET focused on children aged 8–13 who had been referred for treatment of problems involving disruptive conduct, depression, anxiety, or any combination of these. Ten clinical service organizations in Honolulu and Boston, 84 therapists, and 174 youths participated in the project. Youth participants were treated with the usual treatment procedures in their settings or with one or more of three selected evidence-based treatments (EBTs): cognitive-behavioral therapy (CBT) for anxiety, CBT for depression, and behavioral parent training (BPT) for conduct problems. These evidence-based treatments were tested in two forms: standard manual treatment (standard), using full treatment manuals; and modular treatment (modular) in which therapists learn all the component practices of the evidence-based treatments but individualize the use of the components for each child, guided by a clinical algorithm and measurement feedback on practices and clinical progress. A cluster randomization design was employed with therapists assigned to one of three conditions (usual care, standard, and modular) and youth who met study criteria randomized to treatment delivered by one of these three groups of therapists.
Mixed effects regression analyses showed significantly superior outcome trajectories for modular treatment (cause) relative to usual care on weekly measures of a standardized Brief Problem Checklist and a patient-generated Top Problems Assessment (effect), and youths receiving modular treatment had significantly fewer diagnoses than usual care youths at posttreatment (Chorpita et al., 2013; Weisz et al., 2012). In contrast, none of these outcomes showed significant differences between standard treatment and usual care. Follow-up tests also showed significantly better outcomes for modular treatment than standard treatment on the weekly trajectory measures. In general, the modular approach outperformed usual care and the standard approach on the clinical outcome measures, and the standard approach did not outperform usual care.
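As a rough illustration of what comparing weekly outcome trajectories between arms involves, the sketch below uses a simplified two-stage summary-statistics approach (a per-youth least-squares slope, then an arm-level average) rather than the mixed effects regression the trial actually used; the weekly scores are synthetic, invented for this example.

```python
def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

def mean_trajectory_slope(arm_scores):
    """Stage 1: slope of each youth's weekly symptom scores over time.
    Stage 2: average those slopes across the arm."""
    slopes = [ols_slope(range(len(ys)), ys) for ys in arm_scores]
    return sum(slopes) / len(slopes)

# Synthetic weekly problem-checklist-style scores (lower = fewer problems);
# each inner list is one youth's scores over five weeks.
modular_arm = [[20, 17, 14, 11, 8], [18, 16, 13, 11, 9]]
usual_care_arm = [[20, 19, 18, 18, 17], [18, 18, 17, 16, 16]]

# A more negative mean slope indicates a steeper improvement trajectory.
```

A mixed effects model additionally pools information across youths and accounts for the clustering of youths within therapists; this two-stage version only conveys the intuition of comparing trajectory steepness between arms.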
Although the use of the modular approach to evidence-based treatment was assumed to have caused an improvement in behavioral health outcomes in this population, the quantitative data alone could not explain why the modular approach was more successful than the standard approach. To address that question, a qualitative study of the process of EBT dissemination and implementation was embedded in the RCT. Semistructured interviews and focus groups were conducted with 38 therapists, six project supervisors, and eight clinical organization directors or senior administrators to identify patterns of use of the EBTs once the randomized trial had concluded. Twenty-six of the 28 therapists (93%) who had been assigned to the standard or the modular conditions reported using the techniques with nonstudy cases after the trial ended. However, the pattern of use among all therapists, including those in the standard manualized condition, was more consistent with the modular approach. Although all of the therapists in these two conditions thought the EBTs were helpful, what distinguished the two groups was the perception that the modular approach (cause) allowed for more flexibility, accommodation, and control over the therapeutic alliance with clients (effects) than the standard approach. Both therapists and supervisors felt that the modular approach gave them more “license” to negotiate with researchers over the circumstances in which the modules could be modified or, more often than not, supplemented with additional materials and techniques acquired through experience working with similar clients (Palinkas et al., 2013).
We began by asking three questions. The first was whether it is desirable for social workers to identify cause and effect. It is desirable if we believe social work to be an applied, empirically grounded social and cultural science aiming at both causal explanation and interpretative understanding (Otto & Ziegler, 2008, p. 273), one that includes elements of logical positivism and postmodernist social constructivism. It is also desirable if the foundation of our profession is to change the lives of our clients for the better. Kramer (1988, p. 255) makes a similar argument for examining causality in public health, stating that “an understanding of cause is essential for change … A deliberate intervention (change in exposure) will be successful in altering outcome only to the extent that the exposure is a true cause of that outcome.” Alternatively, we might question whether it is possible to develop and implement a solution without a comprehensive understanding of the problem one is trying to solve (Can we achieve y without understanding x?). To answer that question, we would have to determine whether that understanding can be comprehensive without understanding the cause of a problem (Is an understanding of x necessary to produce y?). Further, even if the solution mitigated the consequences of the problem (e.g., reducing symptoms of depression or anxiety), is it truly an effective solution if the cause remains unaddressed (Can we produce y without changing x?)?
The second question we addressed was whether it is possible for social workers to determine causality. Although social workers face inherent challenges in adopting exclusively positivist criteria for determining causality, making connections between a cause and an effect is possible whether one adheres to a positivist or a social constructivist view of society and behavior. If understanding cause and effect is the foundation of any science, then that understanding is possible if it is seen as a process and not as a specific outcome, especially if both the process and the outcome are context-specific.
Finally, we asked about the best means of determining causality or making causal inferences if it is both possible and desirable for social workers to do so. The answer is that both qualitative and quantitative methods can and should be used to fulfill specific roles in that process. Qualitative methods would be especially important in the early exploratory stages of scientific inquiry and for providing in-depth understanding of the causal chain and the context in which it exists. Quantitative methods would be especially important in the later confirmatory stages of scientific inquiry and for generalizing findings to other populations in other settings. Both methods are fundamental to a science of social work.
The integrated use of quantitative and qualitative methods is certainly not a novel concept. Haight (2010), for instance, called for the integration of postpositivist perspectives of critical realism, with an emphasis on quantitative methods and research designs, and interpretative perspectives, with an emphasis on qualitative or mixed research designs and methods. While “postpositivist research using quantitative methods can help to identify generally effective interventions and eliminate the use of harmful or ineffective interventions, … interpretivist research using qualitative methods can enhance understanding of the ways in which cultural context (cause) interact with interventions, resulting in diverse outcomes (effects)” (Haight, 2010, p. 102). Epstein's (2009) model of “evidence-informed practice” calls for the integrated use of evidence-based practice, with its emphasis on standardized quantitative measures and RCT designs, and reflective practice, with its emphasis on qualitative observation. What is novel here is that the process of making causal inferences is not limited to quantitative methods or RCT designs.
Perhaps the greatest challenge we face in creating a science of social work is being faithful to the principles of scientific inquiry on the one hand while remaining responsive to the needs, activities, traditions, and multiple perspectives of our discipline on the other. The diversity of these needs, activities, traditions, and perspectives reflects the complexity of the problems we seek to solve and of the underlying factors responsible for those problems. This complexity makes it difficult to identify single or specific causes of single or specific effects. However, while it may be viewed as an obstacle to the creation of a science of social work, this complexity also represents a unique opportunity to create a science that acknowledges the importance of qualitative as well as quantitative methods, of practice-based evidence as well as evidence-based practice, and of explanation grounded in social constructivism as well as logical positivism.
Funding : The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Support for this article was provided by grants from the William T. Grant Foundation (Grant no. 10648: L. Palinkas, PI), National Institute of Mental Health (P30-MH074678: J. Landsverk, PI; and R01MH076158: P. Chamberlain, PI), and National Institute on Drug Abuse (P30 DA027828-01-A1: C. Hendricks Brown, PI).
Declaration of Conflicting Interests : The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
- Alaggia R, Millington G. Male child sexual abuse: A phenomenology of betrayal. Clinical Social Work Journal. 2008;36:265–275.
- Campbell DT. Legacies of logical positivism and beyond. In: Campbell DT, Russo MJ, editors. Social experimentation. Thousand Oaks, CA: Sage; 1999. pp. 131–144.
- Chorpita BF, Weisz JR, Daleiden EL, Schoenwald SK, Palinkas LA, Miranda J, et al., the Research Network on Youth Mental Health. Long-term outcomes for the Child STEPs randomized effectiveness trial: A comparison of modular and standard treatment designs with usual care. Journal of Consulting and Clinical Psychology. 2013;81:999–1009.
- Cook TD, Campbell DT. Quasi-experimentation: Design and analysis issues for field settings. Chicago, IL: Rand McNally; 1979.
- Creswell JW, Plano Clark VL. Designing and conducting mixed methods research. 2nd ed. Thousand Oaks, CA: Sage; 2011.
- Denzin NK. The logic of naturalistic inquiry. In: Denzin NK, editor. Sociological methods. Thousand Oaks, CA: Sage; 1978.
- Ell K, Katon W, Xie B, Lee PJ, Kapetanovic S, Guterman J, Chou CP. Collaborative care management of major depression among low-income, predominantly Hispanic subjects with diabetes: A randomized controlled trial. Diabetes Care. 2010;33:706–713.
- Epstein I. Promoting harmony where there is commonly conflict: Evidence-informed practice as an integrative strategy. Social Work in Health Care. 2009;48:216–231.
- Glasgow RE, Magid DJ, Beck A, Ritzwoller D, Estabrooks PA. Practical clinical trials for translating research to practice: Design and measurement recommendations. Medical Care. 2005;43:551–557.
- Glisson C, Schoenwald SK, Hemmelgarn A, Green P, Dukes D, Armstrong KS, Chapman JE. Randomized trial of MST and ARC in a two-level evidence-based treatment implementation strategy. Journal of Consulting and Clinical Psychology. 2010;78:537–550.
- Gutierrez L, GlenMaye L, DeLois K. The organizational context of empowerment practice: Implications for social work administration. Social Work. 1995;40:249–258.
- Haight WL. The multiple roles of applied social science research in evidence-based practice. Social Work. 2010;55:101–103.
- Heineman MH. The obsolete scientific imperative in social work research. Social Service Review. 1981;55:371–395.
- Hill AB. The environment and disease: Association or causation? Proceedings of the Royal Society of Medicine. 1965;58:295–300.
- Hume D. A treatise of human nature: Reprinted from the original edition in three volumes and edited with an analytical index by L. A. Selby-Bigge. London, England: Oxford University Press; 1975. (Original work published 1738.)
- Karger HJ. Science, research, and social work: Who controls the profession? Social Work. 1983;28:200–205.
- Kleinbaum DG, Kupper LL, Morgenstern HL. Epidemiologic research: Principles and quantitative methods. Belmont, CA: Lifetime Learning; 1982.
- Koeske GF. Moderator variables in social work research. Journal of Social Service Research. 1993;16:159–178.
- Kramer MS. Clinical epidemiology and biostatistics: A primer for clinical investigators and decision-makers. London, England: Springer-Verlag; 1988.
- Landsverk J, Brown CH, Chamberlain P, Palinkas LA, Horwitz SM. Design and analysis in dissemination and implementation research. In: Brownson RC, Colditz GA, Proctor EK, editors. Dissemination and implementation research in health: Translating science to practice. New York, NY: Oxford University Press; 2012. pp. 225–260.
- Lewis H. Reasoning in practice. Smith College Studies in Social Work. 1975;46:3–15.
- Lofland J. Analyzing social settings: A guide to qualitative observation and analysis. Belmont, CA: Wadsworth; 1971.
- Lofland J, Lofland L. Analyzing social settings: A guide to qualitative observation and analysis. 3rd ed. Belmont, CA: Wadsworth; 1995.
- McNeece CA, Thyer BA. Evidence-based practice and social work. Journal of Evidence-Based Social Work. 2004;1:7–25.
- Miles MB, Huberman AM. Qualitative data analysis: An expanded sourcebook. 2nd ed. Thousand Oaks, CA: Sage; 1994.
- National Association of Social Workers. Code of ethics. 2013. Retrieved from http://www.socialworkers.org/pubs/code/code.asp
- Otto HU, Ziegler H. The notion of causal impact in evidence-based social work: An introduction to the special issue on what works? Research on Social Work Practice. 2008;18:273–277.
- Padgett DK. Qualitative methods in social work research. 2nd ed. Thousand Oaks, CA: Sage; 2008.
- Palinkas LA, Fuentes D, Garcia AR, Finno M, Holloway IW, Chamberlain P. Inter-organizational collaboration in the implementation of evidence-based practices among agencies serving abused and neglected youth. Administration and Policy in Mental Health and Mental Health Services Research. 2014;41:74–85.
- Palinkas LA, Holloway IW, Rice E, Fuentes D, Wu Q, Chamberlain P. Social networks and implementation of evidence-based practices in public youth-serving systems: A mixed methods study. Implementation Science. 2011;6:113.
- Palinkas LA, Horwitz SM, Chamberlain P, Hurlburt M, Landsverk J. Mixed method designs in mental health services research. Psychiatric Services. 2011;62:255–263.
- Palinkas LA, Soydan H. Translation and implementation of evidence-based practice. New York, NY: Oxford University Press; 2012.
- Palinkas LA, Weisz JR, Chorpita B, Levine B, Garland A, Hoagwood KE, Landsverk J. Use of evidence-based treatments for youth mental health subsequent to a randomized controlled effectiveness trial: A qualitative study. Psychiatric Services. 2013;64:1110–1118.
- Parascandola M, Weed DL. Causation in epidemiology. Journal of Epidemiology and Community Health. 2001;55:905–912.
- Ragin CC. The comparative method: Moving beyond qualitative and quantitative strategies. Berkeley: University of California Press; 1987.
- Rosen A. Evidence-based social work practice: Challenges and promise. Social Work Research. 2003;27:197–208.
- Salmon WC. Causality and explanation. New York, NY: Oxford University Press; 1998.
- Sayer A. Realism and social science. Thousand Oaks, CA: Sage; 2000.
- Susser M. Causal thinking in the health sciences: Concepts and strategies of epidemiology. New York, NY: Oxford University Press; 1973.
- Tyson K. New foundations for scientific social and behavioral research: The heuristic paradigm. Boston, MA: Allyn & Bacon; 1995.
- Vidich AJ, Lyman SM. Qualitative methods: Their history in sociology and anthropology. In: Denzin NK, Lincoln YS, editors. Handbook of qualitative research. Thousand Oaks, CA: Sage; 2000. pp. 37–84.
- Weisz JR, Chorpita BF, Palinkas LA, Schoenwald SK, Miranda J, Bearman SK, et al., the Research Network on Youth Mental Health. Testing standard and modular designs for psychotherapy with youth depression, anxiety, and conduct problems: A randomized effectiveness trial. Archives of General Psychiatry. 2012;69:274–282.
- Wu Q, Palinkas LA, He X. An ecological examination of social capital effects on the academic achievement of Chinese migrant children. British Journal of Social Work. 2010;40:2578–2597.