
The Nature and Extent of Quantitative Research in Social Work: A Ten-Year Study of Publications in Social Work Journals


Michael Sheppard, The Nature and Extent of Quantitative Research in Social Work: A Ten-Year Study of Publications in Social Work Journals, The British Journal of Social Work, Volume 46, Issue 6, September 2016, Pages 1520–1536, https://doi.org/10.1093/bjsw/bcv084


The capacity to produce and understand quantitative research has been a major preoccupation for social work in recent years, with major funded initiatives for knowledge and skills development taking place, supported by key social work institutions. This has occurred against a general background of concern about social work's research capacity to underpin evidence-based practice. The issue is complex: on the one hand, quantitative literacy itself presents challenges, sometimes at a basic level; on the other, advanced statistical understanding can be required to carry out research involving quantitative data and to make sense of it. Despite these initiatives, however, very little evidence exists about the nature, range and scope of quantitative research in social work. The research reported here draws on findings from a detailed analysis of 1,490 articles published over ten years in major British-based, but international in scope, refereed journals. The findings paint a complex picture of the scale, scope, complexity, focus and demands presented by quantitative research findings. On the basis of these findings, issues discussed include the learning needs of the profession, professional/academic culture, ‘reader demands’ and research capacity.

Quantitative work seems to present many people in social work with particular problems. Sharland's (2009) authoritative review notes the difficulty ‘with people doing qualitative research not by choice but because it's the only thing they feel safe in’ (p. 31). Sharland's review is replete with concerns expressed within the social work academic community about quantitative competence. On admittedly limited evidence, social work does seem to have particular long-standing issues, unconfined by borders. Diverse strands suggest difficulties at all levels. Royse and Rompf (1992) found higher levels of maths anxiety in social work students compared with a cross-section of others enrolled in introductory statistics courses. Scourfield and Maxwell (2010) found only 5 per cent of British social work doctoral dissertations were primarily quantitative (and a further 30 per cent mixed-methods). Fisher (2003), drawing on the 2001 RAE, questioned social work's research capacity (including quantitative work) to underpin the evidence-based requirements of social work practice, and both the RAE 2001 and 2008 Subject Overviews (HEFCE, 2001, 2008) found quantitative research in social work to be of good quality but small in volume.

This occurs, however, within the context of a perceived deficit in research generally. Sharland (2009, p. 3) comments that there is now the need for a ‘fundamental step change in breadth, depth and quality of the UK research base in social work … the knowledge base is currently inadequate’. One key contextual factor is funding, which is ‘limited, patchy and fragmentary’ (Sharland, 2009, p. 4). There is limited investment in social work and social care research or infrastructure, which is very marked compared with, for example, primary care and nursing (Marsh and Fisher, 2005). The research exercises are also a major contextual factor, in which the limited amount of quantitative social work research has been noted (HEFCE, 2001, 2008), though their terms of reference clearly do not discriminate against quantitative research per se. The social work academic workforce is, furthermore, small, and practice experience, rather than a postgraduate qualification or Ph.D. in itself, is the common prerequisite for appointment. Compared with the USA, furthermore, fewer social work graduates emerge with social science methodological expertise, and in fewer still is it quantitative—comprising only one in twenty Ph.D.s (Scourfield and Maxwell, 2010). Hence, many consultants in Sharland's (2009) report emphasised particularly the need for training in quantitative methods.

Major funded initiatives have been undertaken to begin improving the quantitative literacy of both staff and students in higher education social work. Three of these, supported by major institutions in the profession, have focused, respectively, on: developing a shared common curriculum for quantitative teaching, aimed at creating a significant improvement in the level of quantitative knowledge and skills within social work (McDonald et al., 2009); developing the capacity of social work academics across the country to teach quantitative research; and extending the research capacity of early-, mid-career and senior researchers in higher education social work (Forrester et al., 2012a, 2012b). These initiatives were part of a wider strategy of improving research capacity (Bywaters, 2008; Orme and Powell, 2008; Powell and Orme, 2011; MacIntyre and Paul, 2013), but the specificity of these developments indicates a particular concern for the quantitative.

This concern is expressed not just in terms of the integrity of the discipline, but also of the evidence base for practice (EBP). Forrester (2010) identified key facets of EBP: enabling social work to provide a convincing vision of what it does and the contribution it makes; improving social work's capacity to defend itself against the firestorm of criticism on child deaths; enabling practitioners to make decisions in a more rigorously informed fashion, rather than on simple experience or even idiosyncratic views/beliefs; encouraging excellence in practice; and enabling practitioners to justify actions according to best evidence, rather than ‘common sense’, while nevertheless recognising the individualised and unique nature of work with specific service users. As one consultant in Sharland (2009, p. 54) comments, quantitative methods bring particular qualities to the table: ‘it tells them how frequently certain problems happen, how widespread they are, whether something specific makes a difference. They're not the only part of the equation … . But they do concentrate the mind on outcomes’.

I don't know whether there is an anti-quantitative bias in British sociology. I can think of a few people who in my judgment have an irrational dislike of numbers, but …. For what it's worth my impression is that, en masse, fear and ignorance (rather than irrational antipathy) are mixed together in roughly equal parts (Mills, 2013).

Quantitative literacy—or numeracy—may be defined as the ability ‘to think and express oneself effectively in quantitative terms’ (Free Dictionary, 2015) and is widely regarded as a central feature of graduateness. The Mathematical Association of America (2015) states that ‘Colleges and Universities should treat quantitative literacy as a thoroughly legitimate and even necessary goal for baccalaureate graduates’. The International Bench-Marking Review (Parker et al., 2008) confirms this: a collaboration of eight countries (including the UK) identified minimum graduate expectations for quantitative literacy in the social sciences, as part of ‘an effort by the ESRC, working with the Funding Councils, to develop an integrated strategy aiming to improve the supply of quantitatively trained social scientists’ (p. 7).

Quantitative literacy, the Review suggests, is vital for both academic and practical purposes—with employers demanding more numerate graduates, and research councils concerned about the supply of advanced trained (post)graduates and career staff in the evolution of the academic community (p. 29). Social work—which, Janus-like, faces both ways—qualifies on both counts: its evidence-based practice needs and its academic development of professionally relevant knowledge. The urgency of this need is such that the Review argues not just for stand-alone modules, but for the integration of numeracy across the curriculum, together with its reinforcement and extension as curriculum demands advance.

This is not straightforward. Quantitative literacy has its numerical equivalent of dyslexia: dyscalculia. Specific numerical difficulties can arise from this, characterised by impairments in learning basic arithmetic facts, processing numerical magnitude and performing accurate and fluent calculations. This is an extreme level of numerical disability, perhaps experienced by 5 per cent of the population, though some degree of numerical difficulty may be experienced by 25 per cent of the population (British Dyslexia Association, 2015). There is, then, a spectrum of difficulty with numeracy.

To this may be added matters of ‘methodological principle’. Some social scientists are sceptical about the use of quantitative methods, at times even dismissing them as misrepresenting social reality. The commitment to quantitative research, it appears, may in part have been a casualty of what might be termed disputes surrounding ‘methodological ideology’. By this is meant a belief that some methods represent the social world more adequately, or even exclusively so, than other methods. Age-old disputes about the efficacy of quantitative versus qualitative research are often covers (not very well disguised) for exclusive commitments to positivism or interpretivism, the former assumed exclusively to underlie quantitative and the latter qualitative research (Bryman, 2004). That qualitative research may be exclusively interpretivist and quantitative exclusively positivist is a misrepresentation of the potential of both. Furthermore, the implication that there are only two epistemologies in the social sciences is itself a travesty, when other philosophies, such as conventionalism and in particular realism—the latter now a major ‘player’ in social science research (Pawson and Tilley, 1997)—are available.

Other criticisms include disputing the validity of the deductive logic characterising much quantitative work, the impossibility of value-free research, the need for depth, flexibility and engagement rather than ‘detached observation’ (a need classically championed by Geertz (1973)), and the ‘problem’ of incorporating verstehen (Weber, 1904/49) into quantitative work.

Against these there are, of course, defences, and indeed many social workers have a pragmatic or epistemologically realist-based engagement with multiple methods. Instances of the incorporation of meaning into major quantitative work can be cited (Brown and Harris, 2011)—enabling both measurement and meaning. Indeed, qualitative researchers are not above criticism themselves, for example for insidiously and invalidly incorporating measurement into their work (using terms like ‘some’, ‘a few’, ‘more’ and ‘many’) (Silverman, 2000).

It is not the purpose here to rehearse the full range of arguments for and against the use of quantitative research. Rather, it is to note that there remain, for some, insuperable objections to the validity of the use of quantitative work in social research and this may contribute to its apparently difficult position.

All these features may be considered to underlie the apparently problematic status of quantitative research in social work. However, there is little clear evidence regarding this status and, indeed, the extent—if that is indeed what it is—of its problematic nature. This paper seeks to develop our understanding of the nature, range, significance and development of quantitative research in social work. It does so by focusing on the publication of quantitative research in three major generic British-based international social work journals over a ten-year period: the British Journal of Social Work (BJSW), the European Journal of Social Work (EJSW) and the Journal of Social Work (JSW).

The research reported here combines elements of a priori concerns with an approach reliant on the nature and content of the articles themselves, which may only be discovered in the process of review. On the one hand, there are elements that are of interest because of already existing issues relating to quantitative research: To what extent is social work research characterised by quantitative methods (arising from a perceived limitation in its production) (HEFCE, 2008)? To what extent are articles characterised by numerical representation, say in the form of tables (arising from a perceived difficulty in processing and analysing numerical data)? What degree of complexity characterises the data presented (how well understood and sophisticated are statistical and numerical methods in social work)? And so on.

On the other hand, our approach relied on the data, and the forms of presentation, that were actually available. It would be no good, for example, focusing on narrative presentation if this was not evident in the publications or, in principle, on tabular presentation of data if tables were rare. The result is an approach that owed much to key issues in the production and use of quantitative data, but was mediated, and at times determined, by their actual presentation in the journals.

We did not seek to evaluate individual studies so much as to examine key facets of quantitative publications (though this inevitably entails analysis of the literature as a whole). This differs from systematic reviews, whose purpose is ‘to sum up the best available research on a specific question’ (Campbell Collaboration, 2015). We are not seeking to evaluate the quality of the research per se, but to examine certain key characteristics; we are not focusing on a substantive issue in a particular area, but on all relevant substantive areas; our focus was on (broad) method rather than categorical area.

The journals were in certain respects self-selecting. They were generic, enabling the examination of research across the discipline (as opposed to specialist journals such as Social Work Education or Child and Family Social Work, which focus on particular aspects of social work). They were British-based, but at the same time international in scope and standing: the range of countries from which contributions were taken was extensive, numbering fifty-eight in total. This range indicates, furthermore, their high reputation within the discipline, confirmed by their impact factors and standing within the Web of Science (BJSW 1.162, five-year impact factor 1.484; EJSW 0.352; JSW 0.709; contrast this with, for example, major American journals such as the Social Services Review (0.791) and Social Work (0.877)) (Thomson Reuters, 2014 Journal Citation Reports, www.impact-factor.org/). None of these journals expresses any preference for any particular research methodology—including, in general, quantitative or qualitative research. The published scope of the EJSW states contributions ‘include theoretical debates, empirical studies, research notes, country perspectives, and reviews’; the BJSW covers ‘papers reporting research, discussing practice, and examining principles and theories’; and the JSW ‘is a forum for the publication, dissemination and debate of key ideas and research in social work. The journal aims to advance theoretical understanding, shape policy, and inform practice, and welcomes submissions from all areas of social work’.

All articles published in these journals were examined in detail over a ten-year period (2005–14 inclusive). The ten-year span was chosen because it enabled the examination of a substantial library of work (1,490 articles); it enabled the examination of trends longitudinally; and, in particular, it encapsulated the period following commentary on the limited quantitative input to RAE 2008 (has there been any change?). It was also a period of considerable expansion in these journals' size, providing a clear opportunity to examine the way this affected the contours of different research types.

All contributions were included—other than editorials, book and monograph reviews, conference reports and bulletins—broadly definable as ‘original articles’, including critical reviews and commentaries. A simple, but for our purposes analytically efficacious, distinction was made between research types. Quantitative research is the ‘formal, objective, systematic process in which numerical data are used to obtain information about the world’ (Burns and Grove, 2005, p. 23). Articles were defined as quantitative where all data produced were numerical. The key distinction, for our purposes, from qualitative research lies in the data produced—the latter involving language rather than numerical data. Of course, diverse methodologies may be used in either—and the articles examined manifested considerable diversity—but the key for our purposes lies in the distinction between exclusively numerical and exclusively language-based data. Mixed quantitative and qualitative articles were those which manifested both quantitative and qualitative data, in which the design entailed methods to enable that mixed data production to occur. All articles defined as empirical research had a research protocol/methods element. These approaches, collectively understood here as empirical research, may be distinguished from non-empirical articles, where the work focused on the production of ideas, critical appraisal and review. One other category emerged—the project overview, a distilled summary description of research that had been undertaken rather than a detailed report of research findings.
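To make the decision rule explicit, here is a minimal sketch in Python; the record fields and function are hypothetical illustrations of the stated definitions, not the study's actual coding instrument.

```python
from dataclasses import dataclass

# Hypothetical record for one journal article; field names are
# illustrative, not taken from the study's coding instrument.
@dataclass
class Article:
    has_numerical_data: bool   # produces numerical data
    has_language_data: bool    # produces language-based (qualitative) data
    has_methods_element: bool  # reports a research protocol/methods element
    is_project_overview: bool  # distilled summary of research undertaken

def classify(article: Article) -> str:
    """Apply the stated decision rule for research type."""
    if article.is_project_overview:
        return "project overview"
    if not article.has_methods_element:
        return "non-empirical"  # ideas, critical appraisal, review
    if article.has_numerical_data and article.has_language_data:
        return "mixed"
    if article.has_numerical_data:
        return "quantitative"  # all data produced are numerical
    return "qualitative"       # all data produced are language-based

print(classify(Article(True, False, True, False)))  # -> quantitative
```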

The findings rely very much on features common to all articles. The dimensions reported here were evident in all, or an overwhelming proportion of, cases. Thus, we can report on the use or non-use of tabulated data in all cases, but there is (for example) a minority of cases where the sample size was not given, though those reporting it were in such numbers that we are nevertheless able to make clear observations on trends. Those most likely to lack a ‘sample size’ (in terms of number of participants or cases) involved ethnography (e.g. Ellis, 2007), and in such cases it may matter less because of the variety of techniques used (such as documentary analysis and observation as well as interviews/focus groups).

All the articles were examined in detail and classified along a range of categories consistent across all the articles. These included reported research design; sampling method and size; presence, absence and size of tables; use and analysis of statistics; and service focus. Data involving simple social and demographic characteristics of a particular sample, often a feature of otherwise completely qualitative articles, were not included in the definition of quantitative because they were simple basic description rather than a feature of the research approach.

Over the ten years, 1,490 papers falling into our category of ‘original articles’ were published. Of these, just over half involved empirical research, in the form of quantitative, qualitative or mixed approaches (see Table 1 ). Quantitative research was a minority—just over one in seven articles was quantitative. This rises to just under a quarter when adding mixed approaches to include all articles that had a substantial numerical element. Qualitative research—as suggested by the research exercise reviews (of those articles submitted for assessment only) ( HEFCE, 2001 , 2008 )—was a far more significant feature, comprising nearly one-third (twice that of quantitative research) of published work. These differences, while obvious, are highly statistically significant.
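To illustrate the kind of calculation behind such a significance claim, here is a minimal sketch using SciPy's chi-square test; the counts are invented round numbers chosen to be consistent with the proportions reported above, not the study's actual cell values.

```python
from scipy.stats import chisquare

# Invented article counts by research type (illustrative only):
# quantitative, qualitative, mixed, non-empirical; total 1,490.
observed = [220, 460, 130, 680]
n = sum(observed)
expected = [n / len(observed)] * len(observed)  # equal-share null hypothesis

stat, p = chisquare(observed, expected)
print(f"chi-square = {stat:.2f}, p = {p:.3g}")  # tiny p -> reject equal shares
```

A vanishingly small p-value here simply formalises what the raw proportions already make obvious: the research types are not equally represented.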

The picture is rather more nuanced than these overall data suggest. Another key feature was the increase in published work. The number of articles increased overall from 114 in 2005 to 227 in 2014, and empirical articles grew from fifty-three in 2005 to 150 in 2014. Figure 1 shows this progression in terms of research type. Broadly, this shows the absolute quantity of quantitative research to have increased markedly with the expansion of publications as a whole, but to have continued to lag some way behind qualitative research. Of the three approaches, mixed quantitative and qualitative showed the smallest increase.

Figure 1. Research type by year.

Table 2 shows published research on social care provision by service group. This constituted nearly two-thirds of published empirical research (533/838 articles). Other major areas were those with a social work professional and management focus and a student/education focus, respectively constituting ninety-five and ninety-four articles in total. Table 2 shows children and young people by some margin to have been the dominant social care group focus for empirical articles overall. This was greater than mental health and older people by a factor of four, and well over twice that of other adult services. This, we may safely suggest, reflects the relative degree of public concern and profile attached to social work involvement with each social care group. Different social care groups elicited different methodologies, with quantitative methods featuring significantly more in mental health than in other service groups, particularly older people and other adult services, whose emphasis was much more qualitative.

Table 1. Research type by journal. All publications: χ² = 61.49, df = 8, p < 0.001; empirical research: χ² = 17.86, df = 4, p = 0.001.

Table 2. Research type by service/social care group: χ² = 17.9, df = 6, p = 0.006. Percentages presented as proportions of research articles focusing on these service user groups.

Sample size was defined as the number of participants or ‘cases’ (which could be, for example, service users, department cases or, as here, articles examined). Sometimes research involved multiple stages or elements, and the total size, including the largest element, was taken as the sample (e.g. Schofield et al., 2007). Some articles simply gave no sample size (e.g. Bowey et al., 2005, which simply enumerated focus groups). Others, such as ethnographic research, might describe the work in such a way that sample size could not meaningfully be given (naming the number of focus groups or frequency of observations). Others did not lend themselves to it, such as a content analysis of TV episodes (Henderson and Franklin, 2007). Still others used the total populations of many countries, through official data, expressing findings only in rates (Pritchard et al., 2013).

Sample size could be meaningfully identified in 750 of 838 empirical articles. It is no surprise, perhaps, to find that sample size varied markedly by research approach. Mean sample size for quantitative research was 2,761, for mixed quantitative and qualitative 367, and for qualitative forty-two. There was, though, considerable variance, with standard deviations of 18,383, 1,117 and 77 respectively. The smallest sample sizes were, respectively, eleven, six and one, and the largest were 255,813, 10,000 and, remarkably, 785, a qualitative study involving setting up a Turing-type chat room with 785 children (May-Chahal et al., 2014).
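When the standard deviation dwarfs the mean in this way, the distribution is heavily right-skewed and the median is often the more informative summary. A small illustration with invented sample sizes:

```python
import numpy as np

# Hypothetical, heavily right-skewed sample sizes (illustrative only):
# many modest surveys plus one huge administrative data set.
sizes = np.array([11, 25, 40, 60, 90, 150, 300, 800, 2_500, 255_813])

print(f"mean   = {sizes.mean():,.0f}")       # pulled far upward by the outlier
print(f"sd     = {sizes.std(ddof=1):,.0f}")  # exceeds the mean -> strong skew
print(f"median = {np.median(sizes):,.0f}")   # a more typical study size
```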

Interestingly, there was no significant difference in sampling approach between quantitative and mixed quantitative/qualitative articles (Table 3). Despite the quantitative element in both approaches, over two-thirds were non-probability/non-representative samples. Even where randomisation was used for initial sample selection, moreover, this was mainly not sustained on closer inspection. Thirty-one studies employed a random approach, but twenty-one possessed no method of random substitution where a non-response or participant refusal occurred. This meant that the outcome for these studies was self-selection. Examples include Bergmark and Lundström (2011), Leung and Chan (2014) and Koritsas et al. (2010), where response rates from an initial random sample were 71 per cent, 36 per cent and 21 per cent, respectively.
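A brief simulation may clarify why non-response without random substitution undermines an initially random sample. The population and response model below are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented population: a binary outcome present in 30% of members.
population = rng.random(100_000) < 0.30

# Draw a genuinely random sample.
sample = rng.choice(population, size=1_000, replace=False)

# Invented non-response mechanism: members with the outcome are roughly
# half as likely to respond, mimicking self-selection (overall response
# lands near the 36% rate reported for one of the cited studies).
respond_p = np.where(sample, 0.24, 0.42)
responders = sample[rng.random(sample.size) < respond_p]

print(f"true prevalence     : {population.mean():.2%}")
print(f"random-sample est.  : {sample.mean():.2%}")      # close to truth
print(f"responders-only est.: {responders.mean():.2%}")  # biased downward
```

The responders-only estimate drifts away from the truth not because the initial draw was flawed, but because who answers is no longer random.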

Table 3. Sampling method by research type (quantitative data). χ² test: not significant. *In two cases, it was not possible to determine sampling method.

Nearly three-fifths of the non-probability samples and two-fifths of the quantitative and mixed quantitative/qualitative total (138) were case studies or multiple case studies. These generally possessed some (non-statistical) typicality, for example where a group of children's centres or children's services teams were examined. A further forty-two (18 per cent of the non-probability and 12 per cent of the total quantitative and mixed studies) were purposive samples. Experimentation formed a small minority totalling only ten articles (averaging one per year), including six randomised control trials.

Table 4 shows the level of statistical sophistication apparent in the tabular presentation of data. This was much lower in mixed research than in quantitative, with over 70 per cent of the former involving only narrative incorporating numerical data, or simple descriptive statistics (compared with under a quarter of quantitative articles). Indeed, over half of the quantitative articles included multivariate analysis. Table 5 shows these differences reflected in the tabular complexity apparent at different levels of statistical analysis, with incremental increases in cell numbers from descriptive through bivariate to multivariate analysis.

Table 4. Research type by tabular statistical sophistication: χ² = 86.77, df = 3, p < 0.001.

Table 5. Tabular complexity: mean cell number by tabular statistical type.

This is the first attempt to get a detailed picture of quantitative research in social work focusing on major British-based international generic social work journals. Its scope is considerable, spanning ten years and two (British) research exercises, with detailed analysis of nearly 1,500 articles. Its international significance, furthermore, is clear, with articles stemming from over fifty countries. We should nevertheless be aware of its limitations. Specialist journals, such as those on childcare or education, were not included. The focus is on articles rather than books, reports and unpublished work, although these journals, with known reviewing standards, might be expected to create greater certainty in quality and rigour. And of course it covered a ten-year period, rather than longer (although that might be considered significant in itself), and British-based (though international in scope) journals.

Quantitative research forms not just a minority of publications as a whole (just under one-seventh), but a minority of empirical research articles also, the production being less than half that of qualitative research and just over a third of ideas/theory/reviews. As such, the fears of those who consider quantitative research to have a far lower profile than alternatives appear confirmed. Even where mixed methods are included—and these were variable in the amount of numerical data included—the total was noticeably lower than qualitative work.

There is clear evidence, though, of an increase in research output as a whole—and hence in the capacity of the discipline to provide an EBP—over the ten-year period. The concern expressed in response to research exercise reports (HEFCE, 2001, 2008; Fisher, 2003) seems to have elicited a response from these journals at least, through an increase in the size of volumes and frequency of publication. We do not include here—we do not have data on—the number of submissions made, but it appears highly likely that the increase in journal output is, at least in part, a response to an increase in research output. Quantitative research, it is clear, is part of this expanding research output. Mixed methods—with their quantitative elements—have shown a much less marked increase than either quantitative or qualitative research. Of course, much depends on the substantive usefulness of the articles themselves for practice (a complex issue in itself), but this increase provides a strong indication that the capacity of the research community to provide an EBP has improved.

So quantitative research is increasing in size while remaining relatively small in quantity compared with qualitative research. Does this matter? This depends on the purpose of the research. Where, for example, measurement is important—What is the scale of a particular issue? What degree of change occurred from intervention? What is the performance of one approach relative to another?—quantitative approaches are of considerable significance (precisely because of their numerical focus). Hence, in certain respects, the issue of whether there is enough quantitative research resolves itself into another issue: what is it that we need to know? The adequacy of the scale of quantitative research depends on its (subject) focus (relating to informational needs for practice) and the extent to which the particular qualities bestowed by numerical data are required (this depends on priorities and positions and may change from time to time, e.g. with highly public failures).

The adequacy of the evidence base raises this issue of focus, and it is clear that, although these are generic journals, different social care groups fared very differently in the scale of research involving quantitative data. While children and families outstripped other social care groups by some margin in the absolute number of articles, research on mental health showed proportionately the highest profile for quantitative work. One explanation for this may be its close proximity to the discipline of psychiatry, which itself has a strong emphasis on quantitative work. If so, this would indicate that the methodological orientations, including training, of the researchers themselves, rather than just the kinds of data required, may play an important part in the degree of emphasis on quantitative methods. This might indicate a simple formulation: the more researchers in any particular discipline are trained in quantitative methods (or, more broadly, the more those methods are valued), the more quantitative research will be produced. This is consistent with Sharland's (2009, pp. 17, 100) observations on the emphasis on qualitative methods (where research training is undertaken at all) in UK social work in higher education, notably by comparison with the USA. Quantitative research production, then, would be a function not just of the evidence-based needs of the profession, but of the particular methodological culture pervading it (if true, the data here might suggest the methodological culture of social work is predominantly qualitative).

Whether this emphasis on children and families is a desirable state of affairs depends on the relative importance attached to these social care groups. An argument might be made that the proximity of social care in mental health to psychiatry—with a huge output of its own—means that its output in these journals should not be a matter of concern. The same probably cannot be said, in particular, for older people and these findings indicate their ‘Cinderella’ status in social work research. Indeed, quantitative research has a particularly low profile in this area, with an average of only just over two articles per year on older people appearing in these journals. Quite why this may be the case is not clear: it may reflect the methodological preferences of those interested in older people; or the kinds of issues addressed; or indeed how researchers believe older people and their carers might react to different approaches (might they respond better to the more conversational style of qualitative interviews?).

What, though, of the intellectual accessibility of these articles? Do they pose particular challenges, or require levels of statistical sophistication in the reader? A majority of quantitative articles are not for the statistically faint-hearted. Over half involved multivariate analysis, and we should not assume that bivariate analysis is necessarily straightforward to read for those without statistical training (a correlation matrix involving, for example, seven or eight variables can provide a potentially bewildering data set). This is exacerbated by the complexity of the tabular data presented. There was a practically linear growth in the mean number of cells present as we move from descriptive through bivariate to multivariate analysis. A full understanding of these articles does not therefore just require a degree of technical acumen, but the basic capacity to cope with large amounts of numerical data.
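For a concrete sense of that reading load, a short sketch of the cell count in the seven-variable correlation matrix mentioned above (the variables and data are random and purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Seven invented variables, purely to show the volume of coefficients
# a reader faces in a seven-variable correlation matrix.
n_vars, n_cases = 7, 200
data = rng.normal(size=(n_cases, n_vars))

corr = np.corrcoef(data, rowvar=False)       # 7 x 7 matrix
print(np.round(corr, 2))
print(f"coefficients to read: {corr.size}")  # 49 cells, 21 unique pairs
```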

It is clear that mixed methods are not as challenging in these respects. The large majority of such articles involving narrative or only descriptive statistics in their tables indicates that considerably less quantitative literacy and statistical sophistication is needed in the reader. Indeed, the profile given to qualitative findings in mixed-method articles on the whole gives the reader a ‘break’ from the numeracy demands of the quantitative data.

The sampling approach, finally, provides some food for thought on the evidence-based adequacy of quantitative articles (to what extent does social work have the research capacity to provide an adequate professional evidence base?). It is arguable that total-population and probability/representative samples, together with the use of official data and statistics (which frequently rest on total-population or probability techniques), provide the opportunity for generalising, and thus a strong evidence base. However, matters in this respect are more complex than they appear. These categories of research form a minority of the articles published. And, even where present, the methodology may undermine their potential (at least to some degree). This is particularly the case where random sampling, because of limited response rates and no provision for random substitution, means the sample is self-selected rather than a probability sample. That does not mean such articles have no use, but it does mean they cannot make the same claims to generalisability bestowed by a probability sample.

Even where fully followed through, such sampling approaches have limitations. Publications with the international scope encompassed by these journals inevitably have submissions from profoundly different areas and have an equally diverse readership. If an article uses a probability sample from, say, Israel, to what extent is it applicable to, say, Norway? This, furthermore, assumes that it covers a topic of relevance to both (say educating social workers or fostering provision). Some of the topics (e.g. practising in a war zone or societies with profound schisms) may have relevance for some areas (say Israel and Northern Ireland), but not others (say Belgium and Scotland).

It might be that some non-probability approaches to sampling may be equally or even more efficacious. Where, for instance, purposive sampling or multiple case studies—where these are based on typicality—are undertaken, an argument can be made that they are representative in a non-statistical way. It becomes possible to examine these articles for characteristics which will enable the lessons learned from them to be applied in areas other than where they were carried out. This may well account for the profile given to case studies and purposive sampling in the research undertaken.

Finally, the small number of experiments is worthy of note. Some of those of a quantitative disposition will regard these, and particularly the randomised control trial, as the ‘gold standard’ in evaluative research where specified interventions are undertaken (Sheldon and MacDonald, 2008). If that is the case (and there is scope for debate here), then very little effort is devoted to the use of these methods. In some cases, the nature of the research issue is such that methods other than control trials can be efficacious in seeking to examine outcomes. Multivariate analysis can, in some circumstances, be used for this purpose, and we can find articles which do not use experimental designs but which nevertheless identify the key factors in those outcomes. Such is the case with Sheppard's (2009a, 2009b) longitudinal design identifying two coping approaches—‘Active Coping’ and ‘Seeking Social Support’—as key to positive outcomes for mothers coping in adversity with child-care problems. However, social work is a form of intervention—something which those of an experimental disposition would regard as particularly susceptible to properly designed control trials to establish outcomes—and yet, in these journals, experimentation was a rarely published research design.

In the discipline of social work, quantitative research poses particular challenges because of its demands on quantitative literacy. Some of these have been recognised and it is clear (e.g. in the increased output of quantitative articles) that responses have been made. It may well be, however, that further efforts need to be made in this and other directions. There can be no doubt that, if quantitative findings are to be properly understood by their potential audience, there is a need for a widespread quantitative literacy amongst social workers and social work academics (not to mention politicians). It might be doubted that such quantitative literacy is currently sufficiently widely present in social work.

References

Bergmark, A. and Lundström, T. (2011) ‘Guided or independent? Social workers, central bureaucracy and evidence-based practice’, European Journal of Social Work, 14(3), pp. 323–37.

Bowey, L., McGlaughlin, A. and Saul, C. (2005) ‘Assessing the barriers to achieving genuine housing choice for adults with a learning disability’, British Journal of Social Work, 35, pp. 139–48.

British Dyslexia Association (2015) ‘Dyscalculia’, available online at www.bdadyslexia.org.uk/dyslexic/dyscalculia.

Brown, G. W. and Harris, T. O. (2011) Social Origins of Depression, 2nd edn, London, Routledge.

Bryman, A. (2004) Social Research Methods, Oxford, Oxford University Press.

Burns, N. and Grove, S. K. (2005) The Practice of Nursing Research: Conduct, Critique, and Utilization, 5th edn, St Louis, Elsevier Saunders.

Bywaters, P. (2008) ‘Learning from experience: Developing a research strategy for social work in the UK’, British Journal of Social Work, 38(5), pp. 936–52.

Campbell Collaboration (2015) ‘What is a systematic review?’, available online at www.campbellcollaboration.org/what_is_a_systematic_review/.

Ellis, K. (2007) ‘Direct Payments and social work practice: The significance of “street-level bureaucracy” in determining eligibility’, British Journal of Social Work, 37(3), pp. 405–22.

Fisher, M. (2003) ‘Social work research and the 2001 Research Assessment Exercise: An initial overview’, Social Work Education, 21(3), pp. 71–80.

Forrester, D. (2010) ‘The argument for evidence-based practice in social work’, Community Care, 18 June, available online at www.communitycare.co.uk/2010/06/18/the-argument-for-evidence-based-practice-in-social-work/.

Forrester, D., Devaney, J., Carpenter, J. and Tester, B. (2012a) ‘Making social work count: A national curriculum development programme pioneered in three universities’, ESRC-funded initiative.

Forrester, D., Devaney, J., Carpenter, J., Scourfield, J. and Tester, B. (2012b) ‘Increasing the capacity for quantitative teaching in social work undergraduate courses’, ESRC-funded initiative.

Free Dictionary (2015) ‘Quantitative literacy’, available online at www.thefreedictionary.com/Quantitative+literacy.

Geertz, C. (1973) ‘Thick description: Towards an interpretive theory of culture’, in Geertz, C. (ed.), The Interpretation of Cultures: Selected Essays, New York, Basic Books.

Henderson, L. and Franklin, B. (2007) ‘Sad not bad: Images of social care professionals in popular UK television drama’, Journal of Social Work, 7(2), pp. 133–53.

Higher Education Funding Council for England (HEFCE) (2001) RAE 2001 Overview Report on UOA41 Social Work, available online at www.hero.ac.uk/rae/overview/docs/UoA41.doc.

Higher Education Funding Council for England (HEFCE) (2008) RAE 2008 Subject Overview Report, Panel J, Social Work and Social Policy, available online at www.rae.ac.uk/pubs/2009/ov/.

Koritsas, S., Coles, J. and Boyle, M. (2010) ‘Workplace violence towards social workers: The Australian experience’, British Journal of Social Work, 40(1), pp. 257–71.

Leung, L.-c. and Chan, K.-w. (2014) ‘Understanding the masculinity crisis: Implications for men's services in Hong Kong’, British Journal of Social Work, 44(2), pp. 214–33.

MacIntyre, G. and Paul, S. (2013) ‘Teaching research in social work: Capacity and challenge’, British Journal of Social Work, 43(4), pp. 685–702.

Marsh, P. and Fisher, M. (2005) Report 10: Developing the Evidence Base for Social Work and Social Care Practice, London, Social Care Institute for Excellence.

Mathematical Association of America (2015) ‘Quantitative reasoning for college graduates: A complement to the standards’, available online at www.maa.org/programs/faculty-and-departments/curriculum-department-guidelines-recommendations/quantitative-literacy/quantitative-reasoning-college-graduates.

May-Chahal, C., Mason, C., Rashid, A., Walkerdine, J., Rayson, P. and Greenwood, P. (2014) ‘Safeguarding cyborg childhoods: Incorporating the on/offline behaviour of children into everyday social work practices’, British Journal of Social Work, 44(3), pp. 596–614.

McDonald, L., Forrester, D., Shemmings, D., White, S. and Bernard, C. (2009) ‘Further development of skills of mid-career social work academics: Supportive learning sets, mixed methodologies and research placements’, ESRC-funded initiative.

Mills, C. (2013) ‘Is there an anti-quantitative bias in British sociology?’, available online at http://oxfordsociology.blogspot.co.uk/2013/03/is-there-anti-quantitative-bias-in.html.

Orme, J. and Powell, J. (2008) ‘Building research capacity in social work: Process and issues’, British Journal of Social Work, 38(5), pp. 988–1008.

Parker, J., Dobson, A., Scott, S. and Wyman, M. (2008) International Bench-Marking Review of Best Practice in the Provision of Undergraduate Teaching in Quantitative Methods in the Social Sciences, Swindon, ESRC, available online at www.esrc.ac.uk/_images/International_benchmarking_undergraduate_quantitative_methods_tcm8-2725.pdf.

Pawson, R. and Tilley, N. (1997) Realistic Evaluation, London, Sage.

Powell, J. and Orme, J. (2011) ‘Increasing the confidence and competence of social work researchers: What works?’, British Journal of Social Work, 41(8), pp. 1566–85.

Pritchard, C., Davey, J. and Williams, R. (2013) ‘Who kills children? Re-examining the evidence’, British Journal of Social Work, 43(7), pp. 1403–38.

Royse, D. and Rompf, E. (1992) ‘Math anxiety: A comparison of social work and non-social work students’, Journal of Social Work Education, 28(3), pp. 270–7.

Schofield, G., Thoburn, J., Howell, D. and Dickens, J. (2007) ‘The search for stability and permanence: Modelling the pathways of long-stay looked after children’, British Journal of Social Work, 37(4), pp. 619–42.

Scourfield, J. and Maxwell, N. (2010) ‘Social work doctoral students in the UK: A web-based survey and search of the index to theses’, British Journal of Social Work, 40(2), pp. 548–66.

Sharland, E. (2009) Strategic Advisor for Social Work and Social Care Research: Main Report to the Economic and Social Research Council Training and Development Board, Swindon, ESRC.

Sheldon, B. and MacDonald, G. (2008) A Textbook of Social Work, London, Taylor and Francis.

Sheppard, M. (2009a) ‘High thresholds and prevention in children's services: The impact of mothers' coping strategies on outcome of child and parenting problems: Six month follow up’, British Journal of Social Work, 39(1), pp. 46–64.

Sheppard, M. (2009b) ‘Social support use as a parental coping strategy: Its impact on outcome of child and parenting problems—A six-month follow-up’, British Journal of Social Work, 39, pp. 1427–46.

Silverman, D. (2000) Doing Qualitative Research: A Practical Handbook, London, Sage.

Weber, M. (1904/49) ‘Objectivity in social science and social policy’, in Shils, E. and Finch, H. (eds), The Methodology of the Social Sciences, New York, Free Press.


Quantitative Research Methods for Social Work: Making Social Work Count


Teater, B., Devaney, J., Forrester, D., Scourfield, J. and Carpenter, J. (2017) Quantitative Research Methods for Social Work: Making Social Work Count, London, Palgrave Macmillan (ISBN 978-1-137-40026-0).

Social work knowledge and understanding draws heavily on research, and the ability to critically analyse research findings is a core skill for social workers. However, while many social work students are confident in reading qualitative data, a lack of understanding of basic statistical concepts means that this same confidence does not always apply to quantitative data. The book arose from a curriculum development project funded by the Economic and Social Research Council (ESRC), in conjunction with the Higher Education Funding Council for England, the British Academy and the Nuffield Foundation. This was part of a wider initiative to increase the numbers of quantitative social scientists in the UK in order to address an identified skills gap, relating both to the conduct of quantitative research and to the literacy of social scientists in reading and interpreting statistical information. The book is a comprehensive resource for students and educators. It is packed with activities and examples from social work covering the basic concepts of quantitative research methods – including reliability, validity, probability, variables and hypothesis testing – and explores key areas of data collection, analysis and evaluation, providing a detailed examination of their application to social work practice.


Module 3 Chapter 3: Overview of Quantitative Traditions

Just as there were multiple traditions under the umbrella heading of qualitative research approaches, different types of quantitative research approaches are used in social work and the social and behavioral sciences. In this chapter you are introduced to:

  • cross-sectional research
  • longitudinal research
  • descriptive research
  • correlational research
  • experimental research.  

Cross-Sectional and Longitudinal Research

Research for understanding diverse populations, social work problems, and social phenomena might be conducted either cross-sectionally or longitudinally. As you will see, the decision to use a cross-sectional or longitudinal research design has a lot to do with the nature of the research questions being addressed, the type of data available to investigators, and a weighing of the advantages and disadvantages of each approach.

Cross-Sectional Research Design . A cross-sectional study involves data collection at just one point in time. It lends itself to research questions that do not relate to change over time but are descriptive, exploratory, or explanatory about a specific point in time. In a study addressing an epidemiological or other descriptive question, for example, the cross-sectional design allows investigators to describe a phenomenon at a single time point. An example might be a study of the prevalence of individuals’ exposure to gun violence in their communities and ratings of collective efficacy for the community (Riley et al., 2017). Survey data were collected from participants at one point in time (2014) in two communities with high rates of violent crime. Collective efficacy was conceptualized as each individual’s perception about formal and informal cohesion in their neighborhood, the strength of social bonds among neighbors, and the ability of community members to achieve public order through informal mechanisms. The team hypothesized an inverse relationship between these two factors: that higher rates of gun violence exposure would correlate with lower scores for perceived collective efficacy. Responses on 153 surveys were analyzed, demonstrating that exposure to gun violence was very high (95% had heard gunshots, 21% heard gunshots at least weekly, over 1/3 had been present when someone was shot). The hypothesized relationship between individuals’ higher exposure to gun violence and their lower perceptions of community’s collective efficacy was observed.
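To make the analytic step concrete, here is a minimal sketch of testing such an inverse relationship with a Pearson correlation; the exposure and efficacy scores below are invented stand-ins, not the study's data.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)

# Invented stand-ins for 153 survey responses: a gun-violence exposure
# score and a collective-efficacy score built to be negatively related.
exposure = rng.normal(size=153)
efficacy = -0.5 * exposure + rng.normal(scale=0.9, size=153)

r, p = pearsonr(exposure, efficacy)
print(f"r = {r:.2f}, p = {p:.3g}")  # negative r supports the hypothesis
```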


Cross-sectional research designs are also used in comparison studies where a phenomenon’s expression is compared for two or more groups—groups differing in race or ethnicity, sex or gender identity, geographical location, or other social demographic factors, for example. These studies might also compare groups of individuals who received a certain intervention and those who did not. Or, this design might compare groups of individuals who differ in severity or other characteristics of a social work or behavioral health problem. A cross-sectional design was used to answer a question concerning how bipolar disorder might differ from major depressive disorder outside of major mood episodes (Nilsson, Straarup, & Halvorsen, 2015). Investigators compared 49 individuals with bipolar disorder to 30 participants with major depressive disorder at a point in time when each was in remission. Participants were compared on 15 “early maladaptive schemas” which included their self-ratings of abandonment, mistrust, social isolation, defectiveness, failure to achieve, dependence, emotional inhibition, insufficient self-control, and others. The investigators observed statistically different scores between the two groups on 7 of the early maladaptive schemas, with the bipolar diagnosis being associated with less favorable ratings. This evidence contributes to a clinical understanding of differences between these two diagnosed mood disorders, differences with potential implications for intervention, especially with those diagnosed with bipolar disorder.
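Here is a sketch of the kind of comparison behind such findings: an independent-samples t-test for each schema, with a Bonferroni correction for running 15 tests. All scores are invented; the built-in group difference on seven schemas simply mirrors the pattern reported.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)
n_schemas = 15

# Invented schema ratings: 49 "bipolar" vs 30 "major depression" rows,
# with a built-in group difference on the first seven schemas only.
bipolar = rng.normal(loc=3.0, scale=1.0, size=(49, n_schemas))
bipolar[:, :7] += 1.0  # less favorable ratings on seven schemas
depression = rng.normal(loc=3.0, scale=1.0, size=(30, n_schemas))

alpha = 0.05 / n_schemas  # Bonferroni correction for 15 comparisons
for i in range(n_schemas):
    t, p = ttest_ind(bipolar[:, i], depression[:, i])
    flag = "significant" if p < alpha else "ns"
    print(f"schema {i + 1:2d}: t = {t:5.2f}, p = {p:.4f} ({flag})")
```

The correction matters: running 15 uncorrected tests at the 0.05 level would be expected to produce spurious "significant" differences by chance alone.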


In other studies, the groups being compared using a cross-sectional research design might also represent different age groups. The conclusions drawn from such a study could be intended to help develop or test theory concerning the developmental course of a phenomenon or changes in it over time.

A cross-sectional study example explored the potential impact of young children having been exposed to intimate partner violence (Bowen, 2017). The investigator utilized one data point (when the children were four years old) to assess the presence of conduct disorder symptoms among children whose mothers reported their partner having been emotionally cruel and/or having physically hurt them during the child’s first 33 months. Among 7,743 children, the mothers of 18.4% of girls and 17% of boys reported experiencing a measured form of intimate partner violence during that developmental period. Conduct problems were significantly more common among these children compared to children whose mothers reported no exposure to intimate partner violence: 2.12 times more common among exposed boys and 2.85 times more common among exposed girls. The source of data for this study was longitudinal: one cohort of families participated in data collection at multiple time points during the course of the child’s development; however, the study results reported by Bowen (2017) were based on one time point alone, making it a cross-sectional study.
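Figures such as “2.12 times more common” can be read as risk ratios (that reading of “more common” is an assumption here); below is a short sketch of the arithmetic from a hypothetical 2×2 table with invented counts.

```python
# Invented 2x2 counts for one group (exposed vs not exposed to
# intimate partner violence), purely to illustrate the arithmetic:
#                conduct problems    no conduct problems
# exposed               80                   570
# not exposed          210                 3,170

risk_exposed = 80 / (80 + 570)        # ~0.123
risk_unexposed = 210 / (210 + 3_170)  # ~0.062
risk_ratio = risk_exposed / risk_unexposed

print(f"risk ratio = {risk_ratio:.2f}")  # ~2.0: roughly twice as common
```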

The cross-sectional study presents a single snapshot description of the group’s aggregated ratings. A longitudinal study design is more sensitive to fluctuations over time and to variation patterns experienced by individual participants, presenting a more video-like description. When the effect of time (or age) is of interest, conclusions from cross-sectional studies must be drawn with caution, since many alternative explanations for any observed group differences exist, most of which have little to do with the passage of time or developmental changes. That is where longitudinal designs are more helpful.

Longitudinal Research Design.  A longitudinal study involves data collection from the same participants at two or more points in time. The beauty of this approach is that it allows investigators to directly address questions involving change over time. Longitudinal studies are a common means of studying or evaluating social work interventions, for example. This design strategy is also common in studies about human developmental processes. For example, a longitudinal study design was used to study behavior and cognitive functioning in 189 young adolescents who had experienced prenatal cocaine exposure and 183 who had not (Minnes et al., 2016). Data were collected when these individuals were 12 years old, and again when they were aged 15. At age 12, girls with prenatal cocaine exposure had more problems regulating their behavior and with metacognition than did girls without prenatal cocaine exposure; there was no observed difference between the groups for boys. At this point, the study conclusions could have been derived from a cross-sectional study comparing the two groups at age 12 alone. However, the investigators also reported that both behavioral and metacognition problems improved from age 12 to 15 for the prenatally cocaine exposed girls at a greater rate than the other groups. While some differences did exist at age 15, it is important to note that the conclusions that might have been drawn from the cross-sectional element of the study (girls at age 12) are different from conclusions drawn from the longitudinal report comparing ages 12 and 15.
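One simple way to analyse “improved at a greater rate” is to compare change scores (age 15 minus age 12) between groups; here is a sketch with invented scores, not the study’s data.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(4)

# Invented behavior-regulation scores at ages 12 and 15 (higher = worse),
# with the exposed group starting worse but improving faster.
exposed_12 = rng.normal(loc=60, scale=8, size=90)
exposed_15 = exposed_12 - rng.normal(loc=6, scale=3, size=90)
control_12 = rng.normal(loc=55, scale=8, size=90)
control_15 = control_12 - rng.normal(loc=2, scale=3, size=90)

change_exposed = exposed_15 - exposed_12
change_control = control_15 - control_12

t, p = ttest_ind(change_exposed, change_control)
print(f"mean change: exposed {change_exposed.mean():.1f}, "
      f"control {change_control.mean():.1f}; t = {t:.2f}, p = {p:.3g}")
```

Note how this analysis is only possible because the same individuals were measured twice; a single cross-section at age 12 would have told a different story.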

The group of individuals being followed in a longitudinal study is called a study cohort . In the above example, the cohort was a group of babies born 12 years before the analyzed data were first collected. This group continued to be followed at age 15. In addition to studying a birth cohort like this, investigators might study cohorts other than birth cohorts. For example, an incoming class of BSW students might be followed each semester until graduation from the program. Or, investigators might longitudinally follow a cohort of veterans and their families as they return from deployment at about the same time. Or, investigators might follow a group of survivors of a traumatic event, such as the 9/11 collapse of the twin towers in New York City or the Washington, DC attack on the Pentagon. The concept of “cohort” indicates that the study participants share a common experience or event.

Comparing Cross-Sectional and Longitudinal Designs. These two study designs result in somewhat different perspectives even when the same phenomenon is being examined, much as an orange sliced across its equator looks different from one sliced from top to bottom.


Each design approach is associated with advantages and disadvantages that come under consideration by investigators as they plan their studies. Let’s take a closer look at what these considerations might include.

Individual variation.  Longitudinal studies, compared to cross-sectional studies, are more sensitive to the unique, diverse patterns of behavior observed for individuals over time. In cross-sectional studies, individual differences are blended together for each group being compared. In longitudinal studies, behavior at one point in time is maintained within the context of behavior at prior and later points in time. This provides useful information about how individuals change.

Adaptability. One limitation of longitudinal studies is that investigators are “stuck” with continuing to use the same (or very similar) measurement approach throughout the life of a study. Measurement consistency ensures that differences at different time points are a function of time, rather than different measurement approaches. Unfortunately, the science of measurement may progress over the course of a longitudinal study so that by the end of the study, a better way to measure the variables of interest may have emerged. A cross-sectional study can capitalize on the best, latest measurement strategy available at the time the study is conducted. The longitudinal study must continue to use what was the best, latest measurement strategy at the start of the study, regardless of progress in measurement science that has occurred by the end of the study period.

Concurrent versus retrospective measurement. If investigators are addressing questions about change, data collected concurrently at each point in time may be more reliable than data collected at one point asking about previous points in time. This latter retrospective data may be contaminated by memory or other sources of bias, making inferences about change less accurate.

Time. Imagine that investigators wanted to know how driving after drinking excessive amounts of alcohol might change as people age. A study conducted in 2008 demonstrated that the percentage of persons who did so during the prior 12 months was highest among persons aged 16-20 years (57%), and declined with each subsequent age group to those aged 65 and older (see Figure 3-1). This cross-sectional study, covering an age span of more than 50 years, was conducted in about four months. If the same question were to be studied longitudinally, following the 16-year-old participants until they were aged 65 and asking the question at intervals along the way, it would take every bit of those 50 years to complete. Sometimes longitudinal studies are simply impractical to undertake because of the time involved.

Figure 3-1. Percent driving during past 12 months when thought to be over the legal limit for alcohol consumption (N=1466 drivers who also consume alcohol, adapted from NHTSA, 2010).
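A small sketch shows how such cross-sectional age-group percentages can be tabulated and eyeballed in one pass. Only the 16-20 figure (57%) appears in the text above; the other values are hypothetical stand-ins for the declining trend.

```python
# Hedged sketch: tabulating cross-sectional percentages by age group.
# Only the 16-20 value (57%) is stated in the text; the rest are
# hypothetical values illustrating the declining trend.

age_group_pct = {
    "16-20": 57,   # reported in the text
    "21-34": 48,   # hypothetical
    "35-49": 35,   # hypothetical
    "50-64": 24,   # hypothetical
    "65+":   15,   # hypothetical
}

for group, pct in age_group_pct.items():
    bar = "#" * (pct // 3)  # crude text bar chart
    print(f"{group:>6}: {bar} {pct}%")
```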


Attrition . Investigators conducting cross-sectional studies, because they involve only one measurement point for each study participant, are not concerned with losing participants over time. Participant attrition (drop-out) from longitudinal studies is a very serious concern, and a great deal of effort needs to be directed toward retaining each participant over time. Participants disappear for a number of reasons: they lose interest in study participation, the condition or phenomenon being studied changes, they move out of the area, or they become incarcerated or otherwise lost to study continuation/follow-up. Some degree of investigator turnover/attrition is possible as well.

Cost.  Longitudinal studies may be more costly than cross-sectional studies to implement because of the need to maintain participants’ involvement over time (retention).

Statistical analyses.  Different types of analyses need to be conducted on data that are longitudinal compared to cross-sectional data. This is neither an advantage nor disadvantage, just a difference worthy of note.

Descriptive, Correlational, and Experimental Research

Back in Module 2, you were introduced to research questions of an exploratory, descriptive, or explanatory nature. Here we examine the traditions in quantitative research that help address these different types of research questions: descriptive, correlational, and experimental studies.  

Descriptive Research. The aim of a descriptive research  study is either to create a profile or typology of a population, or to describe a phenomenon or naturally occurring process (Grinnell & Unrau, 2014; Yegidis, Weinbach, & Meyers, 2018). Descriptive research contributes important information for understanding diverse populations, social work problems, and social phenomena. An example of descriptive research was a study aimed at identifying different substance use patterns among pregnant adolescents (Salas-Wright, Vaughn, & Ugalde, 2016). The study investigators started with an understanding from previous research that there exists an overall pattern of elevated substance use levels prior to conception compared to non-pregnant peers, and of reduced use during pregnancy. However, these investigators were concerned that this general, aggregate picture of pregnant adolescents might mask different patterns (heterogeneity) having important social work practice implications. The team analyzed substance use data for 810 pregnant adolescents, aged 12-17. They were able to discern and describe four different pattern types: abstaining from any substance use, using only alcohol, using both alcohol and cannabis, and polydrug use. Those who only drank alcohol prior to conception tended to cut back or eliminate drinking during pregnancy. Those who used alcohol and cannabis or engaged in polydrug use were fewer in number than those who only used alcohol, but also were less likely to stop using substances during pregnancy. Those who engaged in polydrug use were the youngest group and most likely to meet criteria for a substance use disorder. The different substance use types were not evenly distributed by demographics (ethnicity, age, family income) or pregnancy stage across study participants. The results of this descriptive study have social work and public health implications related to intervening in different ways with adolescents of different types to prevent or reduce prenatal exposure to substances.


Correlational Research. The aim in some studies is to evaluate the existence and nature of relationships that might exist between variables—looking to see if variable x and variable y are associated or correlated with each other. These are called correlational research studies. As mentioned in reference to descriptive studies, the investigator does not manipulate variables experimentally: naturally occurring relationships among variables are the focus of study. An interesting approach to understanding social work problems and social phenomena involves the use of geospatial data. You may have heard the term GIS—short for geographic information system—which refers to data that have a geographical, spatial component. This type of data is useful for social workers seeking to understand how a problem or phenomenon might be experienced in the environmental contexts where people live, work, recreate, or otherwise function.

A correlational study using geospatial data examined relationships between use of and physical access to marijuana with child maltreatment (Freisthler, Gruenewald, & Price Wolf, 2015). First, the investigators demonstrated that parental use of marijuana was positively correlated with the frequency of child physical abuse, negatively correlated with physical neglect, and not correlated with supervisory neglect. Parents reporting marijuana use reported engaging in physical abuse three times more frequently than parents who did not use marijuana, with boys more likely than girls to be physically abused, and older children more likely to be physically abused than younger children. The data for these results came from telephone interviews with 3,023 persons in a state with “legalized” medical marijuana dispensaries and delivery services. The geospatial data came into play in analyses indicating that greater geographical density of medical marijuana dispensaries and delivery services was positively related to greater frequency of physical child abuse. These findings have important implications for social work practice and social policy, including that marijuana use may be problematic even if a marijuana use disorder is not diagnosed in the child welfare context, and that there may exist differences in the impact of marijuana use in communities where distribution is legitimized, relegated to illicit “street-level dealers,” or not present.


Correlation versus Causality . The major limitation of correlational research is that it cannot definitively determine causality in observed relationships between variables of interest. When a correlation is observed between two variables, x and y , at least four possibilities exist:

  • variable x causes changes in variable y ;
  • variable y causes changes in variable x ;
  • a third variable causes changes in both x and y ; or
  • the observed association arose by chance (coincidence).

To logically draw conclusions about causality, at least three conditions need to be met.

1. Evidence exists that the two variables are related—as values for the “cause” change, values for the “effect” also change systematically in relation to those changes in the “cause.” This is what the correlation statistic can demonstrate: changes in x relate to changes in y . In addition, the relationship must be consistently demonstrated: changing the “cause” consistently (not arbitrarily) results in changes to the “effect” (Cozby, 2007). Consider the example presented earlier about exposure to gun violence and perception of community efficacy (Riley et al., 2017). The investigators observed that as ratings of exposure to gun violence increased (the x or hypothesized “cause” variable), ratings of perceived collective efficacy decreased (the y or hypothesized “effect” variable). This kind of relationship between the variables is called a negative or inverse correlation—as one goes up, the other goes down. Depending on the actual values of the relationship, it might look something like Figure 3-2, where more exposure to gun violence (moving from left to right) is associated with lower community efficacy.

Figure 3-2. Depiction of negative (inverse) correlation using hypothetical values
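For readers who want to see the statistic itself, here is a minimal sketch computing a Pearson correlation on hypothetical ratings shaped like Figure 3-2; the values are invented, not the Riley et al. (2017) data.

```python
# Hedged sketch: a negative (inverse) correlation on hypothetical ratings.
# scipy.stats.pearsonr returns the correlation coefficient and a p-value.

from scipy.stats import pearsonr

gun_violence_exposure = [1, 2, 3, 4, 5, 6, 7, 8]  # hypothetical x ratings
collective_efficacy   = [9, 8, 8, 6, 5, 5, 3, 2]  # hypothetical y ratings

r, p = pearsonr(gun_violence_exposure, collective_efficacy)
print(f"r = {r:.2f}, p = {p:.4f}")  # r near -1: as exposure rises, efficacy falls
```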


2. The “cause” and “effect” must be sequenced in time, with the “cause” preceding (coming before) the “effect.” For example, consider the earlier-mentioned Bowen (2017) study concerning conduct disorder among four-year-old children who had been exposed to a mother’s experience of intimate partner violence. Before concluding that the exposure caused the conduct disorder symptoms, it is important to demonstrate that the children’s conduct disorder symptoms did not predate the exposure to intimate partner violence. If the conduct disorder came first, it is quite possible that the stress of managing a child with these behaviors influenced the intimate partner violence ( y  caused x ) instead.

3. There is not another, third factor that “causes” both other factors. A demonstrative example comes from a purposefully absurd graph attributed on the internet to Bobby Henderson, used to demonstrate that correlation does not imply causation. The erroneous conclusion drawn from the data is that global warming ( y  variable) is a direct effect of the shrinking number of pirates (variable x ). As you can see from Figure 3-3, between the years 1860 and 2000 the number of Caribbean pirates declined from 45,000 to 17, and in the same period, the average global temperature climbed from 14.25 degrees Celsius to 16 degrees Celsius (“data” retrieved from https://en.wikipedia.org/wiki/Flying_Spaghetti_Monster ).

Figure 3-3. Approximate number of Caribbean pirates and average global temperature, 1860-2000 (hypothetical “data”)

While the “data” indicate an inverse association between the two variables (notice that the approximate number of pirates is declining as year is increasing on the x  axis), it is absurd to conclude that the reduction in pirate numbers caused global warming. A third variable might be causing both: for example, the mechanism of change in both might be related to the global spread of industrialization.
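This third-variable problem is easy to demonstrate by simulation. In the sketch below, a confounder z drives both x and y, so x and y correlate strongly even though neither causes the other; all values are simulated.

```python
# Hedged sketch: a confounder z (say, industrialization) drives both x and y,
# producing a strong x-y correlation with no direct causal link. Simulated data.

import random
from scipy.stats import pearsonr

random.seed(42)
z = [i + random.gauss(0, 1) for i in range(100)]               # confounder over time
x = [45000 - 450 * zi + random.gauss(0, 500) for zi in z]      # "pirates": falls as z rises
y = [14.25 + 0.0175 * zi + random.gauss(0, 0.05) for zi in z]  # "temperature": rises with z

r, _ = pearsonr(x, y)
print(f"r(x, y) = {r:.2f}")  # strongly negative, yet x never influences y
```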

In summary, correlational research studies allow investigators to identify possible associations between factors, helping to develop better understanding about diverse populations, social work problems, and social phenomena. However, additional research is necessary to determine if there are causal relationships involved. This is important if we are to use evidence to inform the interventions we develop and test. If we do not understand the causal nature of the relationships and the mechanisms by which observed effects are caused, then we cannot determine the best places and ways to intervene. This understanding is enhanced through experimental research.

Experimental Research

The aim of experimental research is to answer explanatory questions: to test hypotheses or theory about phenomena and processes. This includes testing theory and social work interventions. Our next course, SWK 3402, emphasizes research and evaluation to understand interventions. Here we focus on experimental research to understand diverse populations, social work problems, and social phenomena. Experiments are designed to help develop our understanding of causal relationships between factors, the etiology of social work problems and social phenomena, and the mechanisms by which change might occur. Investigators develop and discover this kind of information through systematic manipulation of specific factors and observing the impact of those manipulations on the outcomes of interest. The study designs and methods applied in experimental research are selected to eliminate as many alternative explanations for the results as possible, thereby increasing confidence that the tested mechanism or explanation is accurately accepted: “Explanatory research depends on our ability to rule out other explanations for our findings” (Engel & Schutt, 2013, p. 19). This observation leads naturally to a discussion of internal validity and external validity in experimental research.

Internal Validity . The concept of internal validity refers to the degree of confidence that can be applied to the results of an experimental study. Remember, the purpose of an experiment is to develop an understanding of causal relationships between variables—in other words, to test explanatory hypotheses. Strong internal validity means greater confidence in drawing causal or explanatory conclusions from the data; this is sometimes referred to as study integrity. Experimental design is sometimes referred to as “the gold standard” against which all others are compared. This, however, would only be true if the aim of those being compared is explanatory—many other designs are important, depending on the nature of the research questions and aims. If, however, the aim is explanatory, designing a study that allows for investigator control over the greatest number of possible explanatory factors, ruling out alternative explanations for the observed experimental results, is most desirable—it means the study has strong internal validity.

External Validity.  External validity is about the degree to which a study’s findings can be generalized from the study sample to the population that sample represents. External validity is again about confidence in the results, but this time it is about how well the results from a particular study represent a general reality for other similar people and other similar settings. For example, on attitudes about social issues, studies conducted with college student samples from private universities may poorly represent the nation’s population of college-aged adults, or even the population of the nation’s college students. Similarly, studies about a social work problem conducted with people in treatment or receiving services might not be representative of people experiencing the same problem who are not in treatment and may never have sought treatment for the problem. This is called the clinical sample problem in relation to external validity concerns. As you will see in Chapter 6, external validity is powerfully influenced by the procedures involved in developing and retaining a study sample, including (but not limited to) problems of small sample size.

The Monitoring the Future study represents an example of strong cross-sectional survey methodology, contributing to the study’s high external validity. The Monitoring the Future study has been conducted with 12th-grade students every year since 1975, and with 8th, 10th, and 12th graders since 1991—you may very well have participated in the study at some point in your life. Some young adults have been surveyed in more recent years, as well. The purpose is to study trends in U.S. adolescents’ beliefs, attitudes, and behavior. To enhance generalizability of the annual findings, about 50,000 students are surveyed each year. These students come from about 420 public and private high schools across the nation, and include about 18,000 8th graders from about 150 schools, about 17,000 10th graders, and about 16,000 12th graders from about 133 schools. A randomly selected subset of participants has been followed longitudinally every two years since 12th grade, as well. The geographic regions are randomly selected each year, with random selection of the schools in the selected areas, and random selection of the classes to be included within each school. Random selection is a strong contributor to external validity. You may be interested to see some of the trend results reported by the study investigators, located at http://www.monitoringthefuture.org/ .
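The multi-stage random selection described above can be sketched in a few lines. The regions, schools, and classes below are fabricated, and the stage sizes are illustrative only.

```python
# Hedged sketch: multi-stage random selection (regions -> schools -> classes),
# loosely in the spirit of the Monitoring the Future design. All data fabricated.

import random

random.seed(1)

# Fabricated frame: regions -> schools -> class IDs.
frame = {
    f"region_{r}": {
        f"school_{r}_{s}": [f"class_{r}_{s}_{c}" for c in range(8)]
        for s in range(20)
    }
    for r in range(10)
}

regions = random.sample(list(frame), k=4)                      # stage 1: regions
schools = [s for r in regions
           for s in random.sample(list(frame[r]), k=3)]        # stage 2: schools
classes = [c for r in regions for s in frame[r] if s in schools
           for c in random.sample(frame[r][s], k=2)]           # stage 3: classes

print(f"{len(regions)} regions -> {len(schools)} schools -> {len(classes)} classes")
```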


If you have enjoyed television shows like Bill Nye the Science Guy and Mythbusters , then you have witnessed systematic experiments being implemented. Their demonstrations typically include elements of scientific process designed to enhance rigor of the experiments under way. A great deal of social science, behavioral science, and social work research follows scientific methods and logic developed in physical and natural sciences. You might wonder why all social work research is not conducted under these scientific, experimental procedures. The simple answer is that a great deal of social work research is designed to address other types of questions: experimental research is designed to answer explanatory questions, not the other types that we have discussed.


A more complex answer includes that many possible experiments cannot be practically or ethically conducted. For example, we cannot ethically expose people to conditions that might cause them harm. To understand the impact of disrupted parent-child attachment bonds we cannot experimentally manipulate this condition to see what happens; instead, we need to rely on observing the effects through what happened “naturally,” such as when policy led to children and immigrant/refugee parents being separated at U.S. borders while parents were detained for having entered the country illegally. Similarly, we could not ethically conduct controlled experiments where children are randomly assigned to being raised by parents either with or without substance misuse problems,  where families are randomly assigned to living in homes with or without lead contamination, or where communities are randomly assigned to having or not having access to affordable, healthy food. Abuse and ethical violations in experiments conducted with unknowing, under-informed, unempowered, and otherwise vulnerable persons led to the development of policies, guidelines, and practices discussed in your CITI training about research involving human subjects—including the Belmont Report and Institutional Review Boards.


Chapter Summary

This chapter introduced the traditions of quantitative research. You read about the distinctions, advantages, and disadvantages of cross-sectional and longitudinal study designs. You also read about descriptive, correlational, and experimental studies, and were reminded that correlation does not imply causation. Finally, the topics of internal and external validity were examined. The next chapter further develops a foundation for understanding the wide range of available options and decisions to be made by investigators planning experimental research. The focus includes understanding the nature of the variables involved in quantitative studies, whether those studies are descriptive, correlational, or experimental in nature.

Social Work 3401 Coursebook Copyright © by Dr. Audrey Begun is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License , except where otherwise noted.


S371 Social Work Research - Jill Chonody: What is Quantitative Research?

Quantitative Research in the Social Sciences

This page is courtesy of University of Southern California: http://libguides.usc.edu/content.php?pid=83009&sid=615867

Quantitative methods emphasize objective measurements and the statistical, mathematical, or numerical analysis of data collected through polls, questionnaires, and surveys, or by manipulating pre-existing statistical data using computational techniques. Quantitative research focuses on gathering numerical data and generalizing them across groups of people or using them to explain a particular phenomenon.

Babbie, Earl R. The Practice of Social Research . 12th ed. Belmont, CA: Wadsworth Cengage, 2010; Muijs, Daniel. Doing Quantitative Research in Education with SPSS . 2nd edition. London: SAGE Publications, 2010.

Characteristics of Quantitative Research

Your goal in conducting a quantitative research study is to determine the relationship between one thing [an independent variable] and another [a dependent or outcome variable] within a population. Quantitative research designs are either descriptive [subjects usually measured once] or experimental [subjects measured before and after a treatment]. A descriptive study establishes only associations between variables; an experimental study establishes causality.

Quantitative research deals in numbers, logic, and an objective stance. Quantitative research focuses on numeric and unchanging data and detailed, convergent reasoning rather than divergent reasoning [i.e., the generation of a variety of ideas about a research problem in a spontaneous, free-flowing manner].

Its main characteristics are:

  • The data are usually gathered using structured research instruments.
  • The results are based on larger sample sizes that are representative of the population.
  • The research study can usually be replicated or repeated, given its high reliability.
  • Researcher has a clearly defined research question to which objective answers are sought.
  • All aspects of the study are carefully designed before data is collected.
  • Data are in the form of numbers and statistics, often arranged in tables, charts, figures, or other non-textual forms.
  • Project can be used to generalize concepts more widely, predict future results, or investigate causal relationships.
  • Researcher uses tools, such as questionnaires or computer software, to collect numerical data.

The overarching aim of a quantitative research study is to classify features, count them, and construct statistical models in an attempt to explain what is observed.

Things to keep in mind when reporting the results of a study using quantitative methods:

  • Explain the data collected and their statistical treatment as well as all relevant results in relation to the research problem you are investigating. Interpretation of results is not appropriate in this section.
  • Report unanticipated events that occurred during your data collection. Explain how the actual analysis differs from the planned analysis. Explain your handling of missing data and why any missing data does not undermine the validity of your analysis.
  • Explain the techniques you used to "clean" your data set.
  • Choose a minimally sufficient statistical procedure; provide a rationale for its use and a reference for it. Specify any computer programs used.
  • Describe the assumptions for each procedure and the steps you took to ensure that they were not violated.
  • When using inferential statistics, provide the descriptive statistics, confidence intervals, and sample sizes for each variable as well as the value of the test statistic, its direction, the degrees of freedom, and the significance level [report the actual p value]; a worked reporting sketch follows the note below.
  • Avoid inferring causality, particularly in nonrandomized designs or without further experimentation.
  • Use tables to provide exact values; use figures to convey global effects. Keep figures small in size; include graphic representations of confidence intervals whenever possible.
  • Always tell the reader what to look for in tables and figures.

NOTE: When using pre-existing statistical data gathered and made available by anyone other than yourself [e.g., a government agency], you still must report on the methods that were used to gather the data, describe any missing data, and, if there are any, provide a clear explanation of why the missing data do not undermine the validity of your final analysis.
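As promised above, here is a minimal sketch of what reporting an inferential test along these lines might look like, using simulated scores and an independent-samples t-test; the group names and values are hypothetical.

```python
# Hedged sketch: reporting descriptive statistics, sample sizes, the test
# statistic with degrees of freedom, and the actual p value. Simulated data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(50, 10, size=40)  # simulated scores, group A (n=40)
group_b = rng.normal(45, 10, size=35)  # simulated scores, group B (n=35)

t, p = stats.ttest_ind(group_a, group_b)  # independent-samples t-test
df = len(group_a) + len(group_b) - 2      # df for the pooled-variance test

print(f"Group A: M = {group_a.mean():.1f}, SD = {group_a.std(ddof=1):.1f}, n = {len(group_a)}")
print(f"Group B: M = {group_b.mean():.1f}, SD = {group_b.std(ddof=1):.1f}, n = {len(group_b)}")
print(f"t({df}) = {t:.2f}, p = {p:.3f}")  # report the actual p value
```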

Babbie, Earl R. The Practice of Social Research . 12th ed. Belmont, CA: Wadsworth Cengage, 2010; Brians, Craig Leonard et al. Empirical Political Analysis: Quantitative and Qualitative Research Methods . 8th ed. Boston, MA: Longman, 2011; McNabb, David E. Research Methods in Public Administration and Nonprofit Management: Quantitative and Qualitative Approaches . 2nd ed. Armonk, NY: M.E. Sharpe, 2008; Quantitative Research Methods . Writing@CSU. Colorado State University; Singh, Kultar. Quantitative Social Research Methods . Los Angeles, CA: Sage, 2007.

Basic Research Designs for Quantitative Studies

Before designing a quantitative research study, you must decide whether it will be descriptive or experimental because this will dictate how you gather, analyze, and interpret the results. A descriptive study is governed by the following rules: subjects are generally measured once; the intention is to establish only associations between variables; and the study may include a sample population of hundreds or thousands of subjects to ensure that a valid estimate of a generalized relationship between variables has been obtained. An experimental design includes subjects measured before and after a particular treatment, the sample population may be very small and purposefully chosen, and it is intended to establish causality between variables.

Introduction

The introduction to a quantitative study is usually written in the present tense and from the third person point of view. It covers the following information:

  • Identifies the research problem -- as with any academic study, you must state clearly and concisely the research problem being investigated.
  • Reviews the literature -- review scholarship on the topic, synthesizing key themes and, if necessary, noting studies that have used similar methods of inquiry and analysis. Note where key gaps exist and how your study helps to fill these gaps or clarifies existing knowledge.
  • Describes the theoretical framework -- provide an outline of the theory or hypothesis underpinning your study. If necessary, define unfamiliar or complex terms, concepts, or ideas and provide the appropriate background information to place the research problem in proper context [e.g., historical, cultural, economic, etc.].

Methodology

The methods section of a quantitative study should describe how each objective of your study will be achieved. Be sure to provide enough detail to enable the reader to make an informed assessment of the methods being used to obtain results associated with the research problem. The methods section should be presented in the past tense.

  • Study population and sampling -- where did the data come from; how robust is it; note where gaps exist or what was excluded. Note the procedures used for their selection;
  • Data collection – describe the tools and methods used to collect information and identify the variables being measured; describe the methods used to obtain the data; and, note if the data was pre-existing [i.e., government data] or you gathered it yourself. If you gathered it yourself, describe what type of instrument you used and why. Note that no data set is perfect--describe any limitations in methods of gathering data.
  • Data analysis -- describe the procedures for processing and analyzing the data. If appropriate, describe the specific instruments of analysis used to study each research objective, including mathematical techniques and the type of computer software used to manipulate the data.

Results

The findings of your study should be written objectively and in a succinct and precise format. In quantitative studies, it is common to use graphs, tables, charts, and other non-textual elements to help the reader understand the data. Make sure that non-textual elements do not stand in isolation from the text but are used to supplement the overall description of the results and to help clarify key points being made.

  • Statistical analysis -- how did you analyze the data? What were the key findings from the data? The findings should be presented in a logical, sequential order. Describe but do not interpret these trends or negative results; save that for the discussion section. The results should be presented in the past tense.

Discussion

Discussions should be analytic, logical, and comprehensive. The discussion should meld together your findings in relation to those identified in the literature review and place them within the context of the theoretical framework underpinning the study. The discussion should be presented in the present tense.

  • Interpretation of results -- reiterate the research problem being investigated and compare and contrast the findings with the research questions underlying the study. Did they affirm predicted outcomes or did the data refute them?
  • Description of trends, comparison of groups, or relationships among variables -- describe any trends that emerged from your analysis and explain all unanticipated and statistically nonsignificant findings.
  • Discussion of implications – what is the meaning of your results? Highlight key findings based on the overall results and note findings that you believe are important. How have the results helped fill gaps in understanding the research problem?
  • Limitations -- describe any limitations or unavoidable bias in your study and, if necessary, note why these limitations did not inhibit effective interpretation of the results.

Conclusion

End your study by summarizing the topic and providing a final comment and assessment of the study.

  • Summary of findings – synthesize the answers to your research questions. Do not report any statistical data here; just provide a narrative summary of the key findings and describe what was learned that you did not know before conducting the study.
  • Recommendations – if appropriate to the aim of the assignment, tie key findings with policy recommendations or actions to be taken in practice.
  • Future research – note the need for future research linked to your study’s limitations or to any remaining gaps in the literature that were not addressed in your study.

Black, Thomas R. Doing Quantitative Research in the Social Sciences: An Integrated Approach to Research Design, Measurement and Statistics . London: Sage, 1999; Gay, L. R. and Peter Airasian. Educational Research: Competencies for Analysis and Applications . 7th edition. Upper Saddle River, NJ: Merrill Prentice Hall, 2003; Hector, Anestine. An Overview of Quantitative Research in Composition and TESOL . Department of English, Indiana University of Pennsylvania; Hopkins, Will G. “Quantitative Research Design.” Sportscience 4, 1 (2000); A Strategy for Writing Up Research Results . The Structure, Format, Content, and Style of a Journal-Style Scientific Paper. Department of Biology. Bates College; Nenty, H. Johnson. “Writing a Quantitative Research Thesis.” International Journal of Educational Science 1 (2009): 19-32; Ouyang, Ronghua (John). Basic Inquiry of Quantitative Research . Kennesaw State University.



10. Quantitative sampling

Chapter outline.

  • The sampling process (25 minute read)
  • Sampling approaches for quantitative research (15 minute read)
  • Sample quality (24 minute read)

Content warning: examples contain references to addiction to technology, domestic violence and batterer intervention, cancer, illegal drug use, LGBTQ+ discrimination, binge drinking, intimate partner violence among college students, child abuse, neocolonialism and Western hegemony.

10.1 The sampling process

Learning objectives.

Learners will be able to…

  • Decide where to get your data and who you might need to talk to
  • Evaluate whether it is feasible for you to collect first-hand data from your target population
  • Describe the process of sampling
  • Apply population, sampling frame, and other sampling terminology to sampling people from your project’s target population

One of the things that surprised me most as a research methods professor is how much my students struggle with understanding sampling. It is surprising because people engage in sampling all the time. How do you learn whether you like a particular food, like BBQ ribs? You sample them from different restaurants! Obviously, social scientists put a bit more effort and thought into the process than that, but the underlying logic is the same. By sampling a small group of BBQ ribs from different restaurants and liking most of them, you can conclude that when you encounter BBQ ribs again, you will probably like them. You don’t need to eat all of the BBQ ribs in the world to come to that conclusion, just a small sample. [1] Part of the difficulty my students face is learning sampling terminology, which is the focus of this section.


Who is your study about and who should you talk to?

At this point in the research process, you know what your research question is. Our goal in this chapter is to help you understand how to find the people (or documents) you need to study in order to find the answer to your research question. It may be helpful at this point to distinguish between two concepts. Your unit of analysis is the entity that you wish to be able to say something about at the end of your study (probably what you’d consider to be the main focus of your study). Your unit of observation is the entity (or entities) that you actually observe, measure, or collect in the course of trying to learn something about your unit of analysis.

It is often the case that your unit of analysis and unit of observation are the same. For example, we may want to say something about social work students (unit of analysis), so we ask social work students at our university to complete a survey for our study (unit of observation). In this case, we are observing individuals, i.e., students, so we can make conclusions about individuals.

On the other hand, our unit of analysis and observation can differ. We could sample social work students to draw conclusions about organizations or universities. Perhaps we are comparing students at historically Black colleges and universities (HBCUs) and primarily white institutions (PWIs). Even though our sample was made up of individual students from various colleges (our unit of observation), our unit of analysis was the university as an organization. Conclusions we made from individual-level data were used to understand larger organizations.

Similarly, we could adjust our sampling approach to target specific student cohorts. Perhaps we wanted to understand the experiences of Black social work students in PWIs. We could choose either an individual unit of observation by selecting students, or a group unit of observation by studying the National Association of Black Social Workers .

Sometimes the units of analysis and observation differ due to pragmatic reasons. If we wanted to study whether being a social work student impacted family relationships, we may choose to study family members of students in social work programs who could give us information about how they behaved in the home. In this case, we would be observing family members to draw conclusions about individual students.

In sum, there are many potential units of analysis that a social worker might examine, but some of the most common include individuals, groups, and organizations. Table 10.1 details examples identifying the units of observation and analysis in a hypothetical study of student addiction to electronic gadgets.

First-hand vs. second-hand knowledge

Your unit of analysis will be determined by your research question. Specifically, it should relate to your target population. Your unit of observation, on the other hand, is determined largely by the method of data collection you use to answer that research question. Let’s consider a common issue in social work research: understanding the effectiveness of different social work interventions. Who has first-hand knowledge and who has second-hand knowledge? Well, practitioners would have first-hand knowledge about implementing the intervention. For example, they might discuss with you the unique language they use to help clients understand the intervention. Clients, on the other hand, have first-hand knowledge about the impact of those interventions on their lives. If you want to know if an intervention is effective, you need to ask people who have received it!

Unfortunately, student projects run into pragmatic limitations with sampling from client groups. Clients are often diagnosed with severe mental health issues or have other ongoing issues that render them a vulnerable population at greater risk of harm. Asking a person who was recently experiencing suicidal ideation about that experience may interfere with ongoing treatment. Client records are also confidential and cannot be shared with researchers unless clients give explicit permission. Asking one’s own clients to participate in the study creates a dual relationship with the client, as both clinician and researcher, and dual relationships have conflicting responsibilities and boundaries.

Obviously, studies are done with social work clients all the time. But for student projects in the classroom, students are often required to gather second-hand information from a population that is less vulnerable. Students may instead choose to study clinicians and how they perceive the effectiveness of different interventions. While clinicians can provide an informed perspective, they have less knowledge about personally receiving the intervention. In general, researchers prefer to sample the people who have first-hand knowledge about their topic, though feasibility often forces them to analyze second-hand information instead.

Population: Who do you want to study?

In social scientific research, a population is the cluster of people you are most interested in. It is often the “who” that you want to be able to say something about at the end of your study. While populations in research may be rather large, such as “the American people,” they are typically more specific than that. For example, a large study for which the population of interest is the American people will likely specify which American people, such as adults over the age of 18 or citizens or legal permanent residents. Based on your work in Chapter 2 , you should have a target population identified in your working question. That might be something like “people with developmental disabilities” or “students in a social work program.”

It is almost impossible for a researcher to gather data from their entire population of interest. This might sound surprising or disappointing until you think about the kinds of research questions that social workers typically ask. For example, let’s say we wish to answer the following question: “How does gender impact attendance in a batterer intervention program?” Would you expect to be able to collect data from all people in batterer intervention programs across all nations from all historical time periods? Unless you plan to make answering this research question your entire life’s work (and then some), I’m guessing your answer is a resounding no. So, what to do? Does not having the time or resources to gather data from every single person of interest mean having to give up your research interest?

Let’s think about who could possibly be in your study.

  • What is your population, the people you want to make conclusions about?
  • Do your unit of analysis and unit of observation differ or are they the same?
  • Can you ethically and practically get first-hand information from the people most knowledgeable about the topic, or will you rely on second-hand information from less vulnerable populations?

Setting: Where will you go to get your data?

While you can’t gather data from everyone, you can find some people from your target population to study. The first rule of sampling is: go where your participants are. You will need to figure out where you will go to get your data. For many student researchers, it is their agency, their peers, their family and friends, or whoever comes across students’ social media posts or emails asking people to participate in their study.

Each setting (agency, social media) limits your reach to only a small segment of your target population who has the opportunity to be a part of your study. This intermediate point between the overall population and the sample of people who actually participate in the researcher’s study is called a sampling frame . A sampling frame is a list of people from which you will draw your sample.

But where do you find a sampling frame? Answering this question is the first step in conducting human subjects research. Social work researchers must think about locations or groups in which your target population gathers or interacts. For example, a study on quality of care in nursing homes may choose a local nursing home because it’s easy to access. The sampling frame could be all of the residents of the nursing home. You would select your participants for your study from the list of residents. Note that this is a real list. That is, an administrator at the nursing home would give you a list with every resident’s name or ID number from which you would select your participants. If you decided to include more nursing homes in your study, then your sampling frame could be all the residents at all the nursing homes who agreed to participate in your study.
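With a real list like this, selection can be as simple as a few lines of code. The sketch below draws a simple random sample from a fabricated resident list; the IDs and sample size are illustrative assumptions.

```python
# Hedged sketch: simple random selection from a real-list sampling frame,
# like the nursing home resident list described above. IDs are fabricated.

import random

random.seed(7)
sampling_frame = [f"resident_{i:03d}" for i in range(1, 121)]  # list from administrator

sample = random.sample(sampling_frame, k=30)  # each resident has an equal chance
print(f"Selected {len(sample)} of {len(sampling_frame)} residents")
print(sample[:5])
```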

Let’s consider some more examples. Unlike nursing home patients, cancer survivors do not live in an enclosed location and may no longer receive treatment at a hospital or clinic. For social work researchers to reach participants, they may consider partnering with a support group that services this population. Perhaps there is a support group at a local church survivors may attend. Without a set list of people, your sampling frame would simply be the people who showed up to the support group on the nights you were there. Similarly, if you posted an advertisement in an online peer-support group for people with cancer, your sampling frame is the people in that group.

More challenging still is recruiting people who are homeless, those with very low income, or those who belong to stigmatized groups. For example, a research study by Johnson and Johnson (2014) [2] attempted to learn usage patterns of “bath salts,” or synthetic stimulants that are marketed as “legal highs.” Users of “bath salts” don’t often gather for meetings, and reaching out to individual treatment centers is unlikely to produce enough participants for a study, as the use of bath salts is rare. To reach participants, these researchers ingeniously used online discussion boards in which users of these drugs communicate. Their sampling frame included everyone who participated in the online discussion boards during the time they collected data. Another example might include using a flyer to let people know about your study, in which case your sampling frame would be anyone who walks past your flyer wherever you hang it—usually in a strategic location where you know your population will be.

In conclusion, sampling frames can be a real list of people like the list of faculty and their ID numbers in a university department, which allows you to clearly identify who is in your study and what chance they have of being selected. However, not all sampling frames allow you to be so specific. It is also important to remember that accessing your sampling frame must be practical and ethical, as we discussed in Chapter 2 and Chapter 6 . For studies that present risks to participants, approval from gatekeepers and the university’s institutional review board (IRB) is needed.

Criteria: What characteristics must your participants have/not have?

Your sampling frame is not just everyone in the setting you identified. For example, if you were studying MSW students who are first-generation college students, you might select your university as the setting, but not everyone in your program is a first-generation student. You need to be more specific about which characteristics or attributes individuals either must have or cannot have before they participate in the study. These are known as inclusion and exclusion criteria, respectively.

Inclusion criteria are the characteristics a person must possess in order to be included in your sample. If you were conducting a survey on LGBTQ+ discrimination at your agency, you might want to sample only clients who identify as LGBTQ+. In that case, your inclusion criteria for your sample would be that individuals have to identify as LGBTQ+.

Comparably,  exclusion criteria are characteristics that disqualify a person from being included in your sample. In the previous example, you could think of cisgenderism and heterosexuality as your exclusion criteria because no person who identifies as heterosexual or cisgender would be included in your sample. Exclusion criteria are often the mirror image of inclusion criteria. However, there may be other criteria by which we want to exclude people from our sample. For example, we may exclude clients who were recently discharged or those who have just begun to receive services.
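A brief sketch shows how inclusion and exclusion criteria translate into a filtered sampling frame. The client records and field names below are fabricated for illustration, not a real agency schema.

```python
# Hedged sketch: applying inclusion and exclusion criteria before sampling.
# All records and field names are fabricated illustrations.

clients = [
    {"id": 1, "identifies_lgbtq": True,  "weeks_in_service": 12},
    {"id": 2, "identifies_lgbtq": False, "weeks_in_service": 30},
    {"id": 3, "identifies_lgbtq": True,  "weeks_in_service": 1},
    {"id": 4, "identifies_lgbtq": True,  "weeks_in_service": 45},
]

def eligible(client: dict) -> bool:
    meets_inclusion = client["identifies_lgbtq"]      # must identify as LGBTQ+
    meets_exclusion = client["weeks_in_service"] < 4  # exclude brand-new clients
    return meets_inclusion and not meets_exclusion

sampling_frame = [c for c in clients if eligible(c)]
print([c["id"] for c in sampling_frame])  # -> [1, 4]
```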


Recruitment: How will you ask people to participate in your study?

Once you have a location and list of people from which to select, all that is left is to reach out to your participants. Recruitment refers to the process by which the researcher informs potential participants about the study and asks them to participate in the research project. Recruitment comes in many different forms. If you have ever received a phone call asking for you to participate in a survey, someone has attempted to recruit you for their study. Perhaps you’ve seen print advertisements on buses, in student centers, or in a newspaper. I’ve received many emails that were passed around my school asking for participants, usually for a graduate student project. As we learn more about specific types of sampling, make sure your recruitment strategy makes sense with your sampling approach. For example, if you put up a flyer in the student health office to recruit student athletes for your study, you may not be targeting your recruitment efforts to settings where your target population is likely to see your recruitment materials.

Recruiting human participants

Sampling is the first time in which you will contact potential study participants. Before you start this process, it is important to make sure you have approval from your university’s institutional review board as well as any gatekeepers at the locations in which you plan to conduct your study. As we discussed in section 10.1, the first rule of sampling is to go where your participants are. If you are studying domestic violence, reach out to local shelters, advocates, or service agencies. Gatekeepers will be necessary to gain access to your participants. For example, a gatekeeper can forward your recruitment email across their employee email list. Review our discussion of gatekeepers in Chapter 2 before proceeding with contacting potential participants as part of recruitment.

Recruitment can take many forms. You may show up at a staff meeting to ask for volunteers. You may send a company-wide email. Each step of this process should be vetted by the IRB as well as other stakeholders and gatekeepers. You will also need to set reasonable expectations for how many reminders you will send to the person before moving on. Generally, it is a good idea to give people a little while to respond, though reminders are often accompanied by an increase in participation. Pragmatically, it is a good idea for you to think through each step of the recruitment process and how much time it will take to complete it.

For example, as a graduate student, I conducted a study of state-level disabilities administrators in which I was recruiting a sample of very busy people and had no financial incentives to offer them for participating in my study. It helped for my research team to bring on board a well-known agency as a research partner, allowing them to review and offer suggestions on our survey and interview questions. This collaborative process took time and had to be completed before sampling could start. Once sampling commenced, I pulled contact names from my collaborator’s database and public websites, and set a weekly schedule of email and phone contacts. I would contact the director once via email. Ten days later, I would follow up via email and by leaving a voicemail with their administrative support staff. Ten days after that, I would reach out to state administrators in a different office via email and then again via phone, if needed. The process took months to complete and required a complex Excel tracking document.

Recruitment will also expose your participants to the informed consent information you prepared. For students going through the IRB, there are templates you will have to follow in order to get your study approved. For students whose projects unfold under the supervision of their department, rather than the IRB, you should check with your professor on what the expectations are for getting participant consent. In the aforementioned study, I used our IRB’s template to create a consent form but did not include a signature line. The IRB allowed me to collect my data without a signature, as there was little risk of harm from the study. It was imperative to review consent information before completing the survey and interview with participants. Only when the participant is totally clear on the purpose, risks and benefits, confidentiality protections, and other information detailed in Chapter 6 , can you ethically move forward with including them in your sample.

Sampling available documents

As with sampling humans, sampling documents centers around the question of which documents are most relevant to your research question—that is, which will provide first-hand knowledge. Common documents analyzed in student research projects include client files, popular media like film and music lyrics, and policies from service agencies. In a case record review, the student would create exclusion and inclusion criteria based on their research question. Once a suitable sampling frame of potential documents exists, the researcher can use probability or non-probability sampling to select which client files are ultimately analyzed.

Sampling documents must also come with consent and buy-in from stakeholders and gatekeepers. Assuming you have approval to conduct your study and access to the documents you need, the process of recruitment is much easier than in studies sampling humans. There is no informed consent process with documents, though research with confidential health or education records must be done in accordance with privacy laws such as the Health Insurance Portability and Accountability Act and the Family Educational Rights and Privacy Act . Barring any technical or policy obstacles, the gathering of documents should be easier and less time consuming than sampling humans.

Sample: Who actually participates in your study?

Once you find a sampling frame from which you can recruit your participants and decide which characteristics you will include and exclude, you will recruit people using a specific sampling approach, which we will cover in Section 10.2. At the end, you’re left with the group of people you successfully recruited from your sampling frame to participate in your study, your sample . If you are a participant in a research project—answering survey questions, participating in interviews, etc.—you are part of the sample in that research project.

Visualizing sampling terms

Sampling terms can be a bit daunting at first. However, with some practice, they will become second nature. Let’s walk through an example from a research project of mine. I collected data for a research project related to how much it costs to become a licensed clinical social worker (LCSW) in each state. Becoming an LCSW is necessary to work in private clinical practice and is used by supervisors in human service organizations to sign off on clinical charts from less credentialed employees, and to provide clinical supervision. If you are interested in providing clinical services as a social worker, you should become familiar with the licensing laws in your state.

Moving from population to setting, you should consider access and consent of stakeholders and the representativeness of the setting. In moving from setting to sampling frame, keep in mind your inclusion and exclusion criteria. In moving finally to sample, keep in mind your sampling approach and recruitment strategy.

Using Figure 10.1 as a guide, my population is clearly clinical social workers, as these are the people about whom I want to draw conclusions. The next step inward would be a sampling frame. Unfortunately, there is no list of every licensed clinical social worker in the United States. I could write to each state’s social work licensing board and ask for a list of names and addresses, perhaps even using a Freedom of Information Act request if they were unwilling to share the information. That option sounds time-consuming and has a low likelihood of success. Instead, I tried to figure out a convenient setting where social workers are likely to congregate. I considered setting up a booth at a National Association of Social Workers (NASW) conference and asking people to participate in my survey. Ultimately, this would prove too costly, and the people who gather at an NASW conference may not be representative of the general population of clinical social workers. I finally discovered the NASW membership email list, which is available to advertisers, including researchers advertising for research projects. While the NASW list does not contain every clinical social worker, it regularly reaches over one hundred thousand social workers through its monthly e-newsletter—a large proportion of social workers in practice—so the setting was likely to draw a representative sample. To gain access to this setting from gatekeepers, I had to provide paperwork showing my study had undergone IRB review and submit my measures for approval by the mailing list administrator.

Once I gained access from gatekeepers, my setting became the members of the NASW mailing list. I decided to advertise to 5,000 potential participants because I knew that people sometimes do not read or respond to email advertisements, and I figured maybe 20% would respond, which would give me around 1,000 responses. Figuring out my sample size was a challenge because I had to balance the costs associated with using the NASW newsletter. As their pricing page showed, it would cost money to learn personal information about my potential participants, which I would need later in order to determine whether my sample was representative of the overall population of social workers. For example, I could see if my sample was comparable in race, age, gender, or state of residence to the broader population of social workers by comparing my sample with information about all social workers published by NASW. I presented my options to my external funder as:

  • I could send an email advertisement to a lot of people (5,000), but I would know very little about them and they would get only one advertisement.
  • I could send multiple advertisements to fewer people (1,000) reminding them to participate, but I would also know more about them by purchasing access to personal information.
  • I could send multiple advertisements to fewer people (2,500), but not purchase access to personal information to minimize costs.

In your project, there is no expectation that you purchase access to anything; if you plan on using email advertisements, consider places that are free to access, like employee or student listservs. At the same time, you will need to consider what you can and cannot know about the people who will potentially be in your study. In my case, I could collect any personal information needed to check representativeness in the survey itself, so we decided to go with option #1. When I sent my email recruiting participants for the study, I specified that I only wanted to hear from social workers who were either currently receiving or had recently received clinical supervision for licensure—my inclusion criteria. This was important because many of the people on the NASW membership list may not be licensed or license-seeking social workers. So, my sampling frame was the set of email addresses on the NASW mailing list belonging to people who fit the inclusion criteria for the study, which I figured would be at least a few thousand people. Unfortunately, only 150 licensed or license-seeking clinical social workers responded to my recruitment email and completed the survey. You will learn in Section 10.3 why this did not make for a very good sample.

From this example, you can see that sampling is a process. The process flows sequentially from figuring out your target population, to thinking about where to find people from your target population, to figuring out how much information you can know about potential participants, and finally to recruiting people from that list to be a part of your sample. Throughout the sampling process, you must consider where people in your target population are likely to be and how best to get their attention for your study. Sampling can be an easy process, like calling every 100th name in the phone book, or a challenging one, like standing every day for a few weeks in an area where people who are homeless gather for shelter. In either case, your goal is to recruit enough people who will participate in your study so you can learn about your population.

What about sampling non-humans?

Many student projects do not involve recruiting and sampling human subjects. Instead, many research projects will sample objects like client charts, movies, or books. The same terms apply, but the process is a bit easier because there are no humans involved. If a research project involves analyzing client files, it is unlikely you will look at every client file that your agency has. You will need to figure out which client files are important to your research question. Perhaps you want to sample clients who have a diagnosis of reactive attachment disorder. You would have to create a list of all clients at your agency (setting) who have reactive attachment disorder (your inclusion criteria) then use your sampling approach (which we will discuss in the next section) to select which client files you will actually analyze for your study (your sample). Recruitment is a lot easier because, well, there’s no one to convince but your gatekeepers, the managers of your agency. However, researchers who publish chart reviews must obtain IRB permission before doing so.

Key Takeaways

  • The first rule of sampling is to go where your participants are. Think about virtual or in-person settings in which your target population gathers. Remember that you may have to engage gatekeepers and stakeholders in accessing many settings, and that you will need to assess the pragmatic challenges and ethical risks and benefits of your study.
  • Consider whether you can sample documents like agency files to answer your research question. Documents are much easier to “recruit” than people!
  • Researchers must consider which characteristics are necessary for people to have (inclusion criteria) or not have (exclusion criteria), as well as how to recruit participants into the sample.
  • Social workers can sample individuals, groups, or organizations.
  • Sometimes the unit of analysis and the unit of observation in the study differ. In student projects, this is often true as target populations may be too vulnerable to expose to research whose potential harms may outweigh the benefits.
  • One’s recruitment method has to match one’s sampling approach, as will be explained in the next chapter.

Once you have identified who may be a part of your study, the next step is to think about where those people gather. Are there in-person locations in your community, or places on the internet, that are easily accessible? List at least one potential setting for your project. Describe for each potential setting:

  • Based on what you know right now, how representative of your population are potential participants in the setting?
  • How much information can you reasonably know about potential participants before you recruit them?
  • Are there gatekeepers and what kinds of concerns might they have?
  • Are there any stakeholders that may be beneficial to bring on board as part of your research team for the project?
  • What interests might stakeholders and gatekeepers bring to the project and would they align with your vision for the project?
  • What ethical issues might you encounter if you sampled people in this setting?

Even though you may not be 100% sure about your setting yet, let’s think about the next steps.

  • For the settings you’ve identified, how might you recruit participants?
  • Identify your inclusion criteria and exclusion criteria, and assess whether you have enough information on whether people in each setting will meet them.

10.2 Sampling approaches for quantitative research

Learning objectives

  • Determine whether you will use probability or non-probability sampling, given the strengths and limitations of each specific sampling approach
  • Distinguish between approaches to probability sampling and detail the reasons to use each approach

Sampling in quantitative research projects is done because it is not feasible to study the whole population; researchers hope to take what they learn from a small group of people (the sample) and apply it to a larger population. There are many ways to approach this process, and they can be grouped into two categories: probability sampling and non-probability sampling. Sampling approaches are inextricably linked with recruitment, and researchers should ensure that their proposal’s recruitment strategy matches the sampling approach.

Probability sampling approaches use a random process, usually a computer program, to select participants from the sampling frame so that everyone has a known (and nonzero) chance of being included. It’s important to note that random means the researcher used a process that is truly random. In a project sampling college students, standing outside of the building in which your social work department is housed and surveying everyone who walks past is not random. Because of the location, you are likely to recruit a disproportionately large number of social work students and fewer from other disciplines. Depending on the time of day, you may recruit more traditional undergraduate students, who take classes during the day, or more graduate students, who take classes in the evenings.

In this example, you are actually using non-probability sampling. Another way to say this is that you are using the most common sampling approach for student projects, availability sampling. Also called convenience sampling, this approach simply recruits people who are convenient or easily available to the researcher. If you have ever been asked by a friend to participate in their research study for their class, or seen an advertisement for a study on a bulletin board or social media, you were being recruited using an availability sampling approach.

There are a number of benefits to the availability sampling approach. First and foremost, it is less costly and time-consuming for the researcher. As long as the person you are attempting to recruit has knowledge of the topic you are studying, the information you get from the sample you recruit will be relevant to your topic (although your sample may not necessarily be representative of a larger population). Availability samples can also be helpful when random sampling isn’t practical. If you are planning to survey students in an LGBTQ+ support group on campus but attendance varies from meeting to meeting, you may show up at a meeting and ask anyone present to participate in your study. A support group with varied membership makes it impossible to have a real list—or sampling frame—from which to randomly select individuals. Availability sampling would help you reach that population.

Availability sampling is appropriate for student and smaller-scale projects, but it comes with significant limitations. The purpose of sampling in quantitative research is to generalize from a small sample to a larger population. Because availability sampling does not use a random process to select participants, the researcher cannot be sure their sample is representative of the population they hope to generalize to. Instead, the recruitment process may have been structured by other factors that bias the sample to be different in some way from the overall population.

So, for instance, if we asked social work students about their level of satisfaction with the services at the student health center, and we sampled in the evenings, we would most likely get a biased perspective of the issue. Students taking only night classes are much more likely to commute to school, spend less time on campus, and use fewer campus services. Our results would not represent what all social work students feel about the topic. We might get the impression that no social work student had ever visited the health center, when that is not actually true at all. Sampling bias will be discussed in detail in Section 10.3.


Approaches to probability sampling

A better strategy might be getting a list of all email addresses of social work students and randomly selecting the addresses of students to whom you will send your survey. This would be an example of simple random sampling. It’s important to note that you need a real list of people in your sampling frame from which to select your email addresses. For projects where the full list of people who could potentially participate is not known to the researcher, probability sampling is not possible. It is also likely that administrators at your school’s registrar would be reluctant to share the list of students’ names and email addresses. Always remember to consider the feasibility and ethical implications of the sampling approach you choose.

Usually, simple random sampling is accomplished by assigning each person, or element, in your sampling frame a number and selecting your participants using a random number generator. You would follow an identical process if you were sampling records or documents as your elements, rather than people. True randomness is difficult to achieve, and it takes complex computation to do so. Although you may think you can select things at random, human-generated randomness is actually quite predictable, as it falls into patterns called heuristics. To truly randomly select elements, researchers rely on computer-generated help. Many free websites have good pseudo-random number generators; a good example is Random.org, which offers a random number generator that can also randomize lists of participants. Sometimes, researchers instead use a table of randomly generated numbers; some statistics and research methods textbooks provide such tables in an appendix.
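To make this concrete, here is a minimal sketch of simple random sampling in Python. The sampling frame of 100 hypothetical student email addresses stands in for whatever real list you obtain; Python’s random module supplies the pseudo-random number generation described above.

```python
import random

# Hypothetical sampling frame: a list of 100 student email addresses.
# Each element implicitly gets a number (its position in the list).
sampling_frame = [f"student{i}@university.edu" for i in range(1, 101)]

random.seed(42)  # optional: makes the example reproducible

# random.sample() draws 25 distinct elements, each with an equal
# chance of selection -- a simple random sample.
sample = random.sample(sampling_frame, k=25)
print(sample[:5])
```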

Though simple, this approach to sampling can be tedious, since the researcher must assign a number to each person in the sampling frame. Systematic sampling techniques are somewhat less tedious but offer the benefits of a random sample. As with simple random samples, you must possess a list of everyone in your sampling frame. Once you have that list, to draw a systematic sample you simply select every kth element on your list. But what is k, and where on the list of population elements does one begin the selection process?

Figure 10.2. Systematic sampling: four people selected by starting at number 2 and taking every third person after that (5, 8, 11).

k is your selection interval, or the distance between the elements you select for inclusion in your study. To begin the selection process, you’ll need to figure out how many elements you wish to include in your sample. Let’s say you want to survey 25 social work students and there are 100 social work students on your campus. In this case, your selection interval, or k, is 4: to get your selection interval, simply divide the total number of population elements by your desired sample size. Systematic sampling starts by randomly selecting a number between 1 and k to start from, and then recruiting every kth person. In our example, we might start at number 3 and then select the 7th, 11th, 15th (and so forth) person on our list of email addresses. In Figure 10.2, the researcher starts at number 2 and then selects every third person for inclusion in the sample.
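Here is the same hypothetical frame sampled systematically. The sketch computes the selection interval k from the frame size and desired sample size, picks a random start between 1 and k, and takes every kth person thereafter.

```python
import random

sampling_frame = [f"student{i}@university.edu" for i in range(1, 101)]

desired_n = 25
k = len(sampling_frame) // desired_n  # selection interval: 100 / 25 = 4

start = random.randint(1, k)  # random starting position between 1 and k

# Take every kth element, beginning at the random start (1-indexed).
sample = sampling_frame[start - 1::k]
print(f"start at person {start}; sample size = {len(sample)}")
```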

There is one clear instance in which systematic sampling should not be employed. If your sampling frame has any pattern to it, you could inadvertently introduce bias into your sample by using a systematic sampling strategy. (Bias will be discussed in more depth in Section 10.3.) This is sometimes referred to as the problem of periodicity. Periodicity refers to the tendency for a pattern to occur at regular intervals.

To stray a bit from our example, imagine we were sampling client charts based on the date they entered a health center and recording the reason for their visit. We may expect more admissions for issues related to alcohol consumption on the weekend than we would during the week. The periodicity of alcohol intoxication may bias our sample towards either overrepresenting or underrepresenting this issue, depending on our sampling interval and whether we collected data on a weekday or weekend.

Advanced probability sampling techniques

Returning again to our idea of sampling student email addresses, one of the challenges in our study will be the different types of students. If we are interested in all social work students, it may be helpful to divide our sampling frame, or list of students, into three lists—one for traditional, full-time undergraduate students, another for part-time undergraduate students, and one more for full-time graduate students—and then randomly select from these lists. This is particularly important if we wanted to make sure our sample had the same proportion of each type of student compared with the general population.

This approach is called stratified random sampling . In stratified random sampling, a researcher will divide the study population into relevant subgroups or strata and then draw a sample from each subgroup, or stratum. Strata is the plural of stratum, so it refers to all of the groups while stratum refers to each group. This can be used to make sure your sample has the same proportion of people from each stratum. If, for example, our sample had many more graduate students than undergraduate students, we may draw incorrect conclusions that do not represent what all social work students experience.

Figure: Stratified random sampling selects a proportionate number of black, grey, and white students from the population into the sample.

Generally, the goal of stratified random sampling is to recruit a sample in which all elements of the population are included sufficiently that conclusions can be drawn about them. Usually, the purpose is to create a sample that is identical to the overall population along whatever strata you’ve identified; in our example, those strata would be graduate and undergraduate students. Stratified random sampling is also useful when a subgroup of interest makes up a relatively small proportion of the overall sample. For example, if your social work program contained relatively few Asian students but you wanted to make sure you recruited enough Asian students to conduct statistical analysis, you could use race to divide people into subgroups or strata and then disproportionately sample from the Asian students to make sure enough of them were in your sample to draw meaningful conclusions. Statistical tests may require a minimum number of participants in each subgroup before you can draw valid conclusions about it.
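As a rough sketch of both proportionate and disproportionate stratified sampling, here is how the student example might look in pandas. The strata sizes and sampling fractions are invented for illustration.

```python
import pandas as pd

# Hypothetical sampling frame: 300 students labeled by stratum.
frame = pd.DataFrame({
    "email": [f"student{i}@university.edu" for i in range(300)],
    "level": ["undergrad full-time"] * 150
             + ["undergrad part-time"] * 90
             + ["graduate"] * 60,
})

# Proportionate: draw 10% from each stratum, so the sample mirrors
# the population's proportions across strata.
proportionate = frame.groupby("level", group_keys=False).sample(
    frac=0.10, random_state=1
)

# Disproportionate: oversample a small stratum (here, graduate students)
# to guarantee enough cases for subgroup analysis.
sizes = {"undergrad full-time": 15, "undergrad part-time": 9, "graduate": 20}
disproportionate = pd.concat(
    frame[frame["level"] == stratum].sample(n=n, random_state=1)
    for stratum, n in sizes.items()
)

print(proportionate["level"].value_counts())
print(disproportionate["level"].value_counts())
```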

Up to this point in our discussion of probability samples, we’ve assumed that researchers will be able to access a list of population elements in order to create a sampling frame. This, as you might imagine, is not always the case. Let’s say, for example, that you wish to conduct a study of health center usage across students at each social work program in your state. Just imagine trying to create a list of every single social work student in the state. Even if you could find a way to generate such a list, attempting to do so might not be the most practical use of your time or resources. When this is the case, researchers turn to cluster sampling. Cluster sampling  occurs when a researcher begins by sampling groups (or clusters) of population elements and then selects elements from within those groups.

Figure: From a population of six clusters of two students each, two clusters were selected for the sample.

Let’s work through how we might use cluster sampling. While creating a list of all social work students in your state would be next to impossible, you could easily create a list of all social work programs in your state. Then, you could draw a random sample of social work programs (your cluster) and then draw another random sample of elements (in this case, social work students) from each of the programs you randomly selected from the list of all programs.

Cluster sampling often works in stages. In this example, we sampled in two stages: (1) social work programs and (2) social work students at each program we selected. However, we could add another stage if it made sense to do so. We could randomly select (1) states in the United States, (2) social work programs in each selected state, and (3) individual social work students. As you might have guessed, sampling in multiple stages does introduce a greater possibility of error, since each stage is subject to its own sampling problems. But cluster sampling is nevertheless a highly efficient method.

Jessica Holt and Wayne Gillespie (2008) [3] used cluster sampling in their study of students’ experiences with violence in intimate relationships. Specifically, the researchers randomly selected 14 classes on their campus and then drew a random sub-sample of students from those classes. But you probably know from your experience with college classes that not all classes are the same size. So, if Holt and Gillespie had simply randomly selected 14 classes and then selected the same number of students from each class to complete their survey, then students in the smaller of those classes would have had a greater chance of being selected for the study than students in the larger classes. Keep in mind, with random sampling the goal is to make sure that each element has the same chance of being selected. When clusters are of different sizes, as in the example of sampling college classes, researchers often use a method called probability proportionate to size (PPS). This means that they take into account that their clusters are of different sizes. They do this by giving clusters different chances of being selected based on their size so that each element within those clusters winds up having an equal chance of being selected.
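A sketch of two-stage cluster sampling with PPS is below; the five classes and their enrollments are invented. Note that drawing weighted clusters sequentially without replacement only approximates strict PPS selection, which is fine for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical clusters: five classes with unequal enrollments.
classes = {"A": 40, "B": 25, "C": 60, "D": 15, "E": 35}
names = list(classes)
sizes = np.array([classes[c] for c in names], dtype=float)

# Stage 1 (PPS): select 2 classes with probability proportionate to size.
chosen = rng.choice(names, size=2, replace=False, p=sizes / sizes.sum())

# Stage 2: draw the SAME number of students (10) from each chosen class.
# Equal stage-2 counts plus PPS at stage 1 give every student a roughly
# equal overall chance of selection, despite unequal class sizes.
sample = {c: rng.choice(classes[c], size=10, replace=False) + 1 for c in chosen}
print(sample)
```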

To summarize, probability samples allow a researcher to make conclusions about larger groups. Probability samples require a sampling frame from which elements, usually human beings, can be selected at random. The use of random selection reduces the error and bias present in non-probability samples, which we will discuss in greater detail in Section 10.3, though some error will always remain. By relying on a random number table or generator, researchers can more accurately state that their sample represents the population from which it was drawn. This strength is common to all of the probability sampling approaches summarized in Table 10.2.

In determining which probability sampling approach makes the most sense for your project, it helps to know more about your population. A simple random sample and a systematic sample are relatively similar to carry out. Both require a list of all elements in your sampling frame. Systematic sampling is slightly easier in that it does not require you to use a random number generator; instead, it uses a sampling interval that is easy to calculate by hand.

However, the relative simplicity of both approaches is counterbalanced by their lack of sensitivity to characteristics of your population. Stratified samples can better account for periodicity by creating strata that reduce or eliminate its effects. Stratified sampling also ensures that smaller subgroups are included in your sample, thereby making your sample more representative of the overall population. While these benefits are important, creating strata for this purpose requires having information about your population before beginning the sampling process. In our social work student example, we would need to know which students are full-time or part-time, and graduate or undergraduate, in order to make sure our sample contained the same proportions. Would you know if someone was a graduate student or a part-time student just based on their email address? If the true population parameters are unknown, stratified sampling becomes significantly more challenging.

Common to each of the previous probability sampling approaches is the necessity of using a real list of all elements in your sampling frame. Cluster sampling is different. It allows a researcher to perform probability sampling in cases where a list of elements is not available or feasible to create. Cluster sampling is also useful for making claims about a larger population (in our previous example, all social work students within a state). However, because sampling occurs at multiple stages in the process (in our previous example, at the program and student levels), sampling error increases. For many researchers, the benefits of cluster sampling outweigh this weakness.

Matching recruitment and sampling approach

Recruitment must match the sampling approach you chose in Section 10.2. For many students, that will mean using recruitment techniques most relevant to availability sampling, such as public postings: flyers, mass emails, or social media posts. However, these methods would not make sense for a study using probability sampling. Probability sampling requires a list of names or other identifying information so you can use a random process to generate a list of people to recruit into your sample. Posting a flyer or a social media message means you don’t know who is looking at the flyer, and thus your sample could not be randomly drawn. Probability sampling often requires knowing how to contact specific participants. For example, you may do as I did and contact potential participants via phone and email. Even then, it’s important to note that not everyone you contact will enter your study. We will discuss how to evaluate the quality of your sample in Section 10.3.

Key Takeaways

  • Probability sampling approaches are more accurate when the researcher wants to generalize from a smaller sample to a larger population. However, non-probability sampling approaches are often more feasible. You will have to weigh the advantages and disadvantages of each when designing your project.
  • There are many kinds of probability sampling approaches, though each requires that you know some information about the people who could potentially participate in your study.
  • Probability sampling also requires that you assign each person within the sampling frame a number and select participants using a truly random process.

Building on the step-by-step sampling plan from the exercises in section 10.1:

  • Identify one of the sampling approaches listed in this chapter that might be appropriate for answering your question, and list its strengths and limitations.
  • Describe how you will recruit your participants and how your plan makes sense with the sampling approach you identified.

Examine one of the empirical articles from your literature review.

  • Identify what sampling approach they used and how they carried it out from start to finish.

10.3 Sample quality

Learning objectives

  • Assess whether your sampling plan is likely to produce a sample that is representative of the population you want to draw conclusions about
  • Identify the considerations that go into producing a representative sample and determining sample size
  • Distinguish between error and bias in a sample and explain the factors that lead to each

Okay, so you’ve chosen where you’re going to get your data (setting), what characteristics you want and don’t want in your sample (inclusion/exclusion criteria), and how you will select and recruit participants (sampling approach and recruitment). That means you are done, right? (I mean, there’s an entire section here, so probably not.) Even if you make good choices and do everything the way you’re supposed to, you can still draw a poor sample. If you are investigating a research question using quantitative methods, the best choice is some kind of probability sampling, but aside from that, how do you know a good sample from a bad sample? As an example, we’ll use a bad sample I collected as part of a research project that didn’t go so well. Hopefully, your sampling will go much better than mine did, but we can always learn from what didn’t work.


Representativeness

A representative sample is “a sample that looks like the population from which it was selected in all respects that are potentially relevant to the study” (Engel & Schutt, 2011). [4] For my study on how much it costs to get an LCSW in each state, I did not get a sample that looked like the overall population to which I wanted to generalize. My sample had a few states with more than ten responses and most states with no responses. That does not look like the true distribution of social workers across the country. I could compare my sample with the number of social workers in each state, based on data from the National Association of Social Workers, or with the number of recent clinical MSW graduates reported by the Council on Social Work Education. More than that, I could see whether my sample matched the overall population of clinical social workers in gender, race, age, or any other important characteristics. Sadly, it wasn’t even close. So, I wasn’t able to use the data to publish a report.

Critique the representativeness of the sample you are planning to gather.

  • Will the sample of people (or documents) look like the population to which you want to generalize?
  • Specifically, what characteristics are important in determining whether a sample is representative of the population? How do these characteristics relate to your research question?

Consider returning to this question once you have completed the sampling process and evaluate whether the sample in your study was similar to what you designed in this section.

Many of my students erroneously assume that using a probability sampling technique will guarantee a representative sample. This is not true. Engel and Schutt (2011) note that probability sampling increases the chance of representativeness; however, it does not guarantee that the sample will be representative. If a representative sample is important to your study, it is best to use a sampling approach that allows you to control the proportion of specific characteristics in your sample. For instance, stratified random sampling allows you to control the distribution of specific variables of interest within your sample. However, that requires knowing information about your participants before you hand them surveys or expose them to an experiment.

In my study, if I wanted to make sure I had a certain number of people from each state (state being the strata), making the proportion of social workers from each state in my sample similar to the overall population, I would need to know which email addresses were from which states. That was not information I had. So, instead I conducted simple random sampling and randomly selected 5,000 of 100,000 email addresses on the NASW list. There was less of a guarantee of representativeness, but whatever variation existed between my sample and the population would be due to random chance. This would not be true for an availability or convenience sample. While these sampling approaches are common for student projects, they come with significant limitations in that variation between the sample and population is due to factors other than chance. We will discuss these non-random differences later in the chapter when we talk about bias. For now, just remember that the representativeness of a sample is helped by using random sampling, though it is not a guarantee.
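One practical way to assess representativeness once data arrive is to line up the sample’s distribution on a key characteristic against a published benchmark. A minimal sketch in Python, with invented state-of-residence figures:

```python
import pandas as pd

# Benchmark: share of social workers by state, from a published source
# (figures invented for illustration).
population_pct = pd.Series({"NY": 0.12, "CA": 0.11, "TX": 0.08, "other": 0.69})

# The respondents you actually recruited (also invented).
sample = pd.Series(["NY"] * 40 + ["CA"] * 5 + ["TX"] * 2 + ["other"] * 40)
sample_pct = sample.value_counts(normalize=True)

# Side-by-side comparison; large gaps flag a non-representative sample.
comparison = pd.DataFrame({"population": population_pct, "sample": sample_pct})
comparison["gap"] = comparison["sample"] - comparison["population"]
print(comparison.round(2))
```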

  • Before you start sampling, do you know enough about your sampling frame to use stratified random sampling, which increases the potential of getting a representative sample?
  • Do you have enough information about your sampling frame to use another probability sampling approach like simple random sampling or cluster sampling?
  • If little information is available on which to select people, are you using availability sampling? Remember that availability sampling is okay if it is the only approach that is feasible for the researcher, but it comes with significant limitations when drawing conclusions about a larger population.

Assessing representativeness should start prior to data collection. I mentioned that I drew my sample from the NASW email list, which, like many organizations’ lists, is sold to advertisers, including companies and researchers who need to reach social workers. How representative of my population is my sampling frame? Well, the first question to ask is what proportion of my sampling frame would actually meet my inclusion and exclusion criteria. Since my study focused specifically on clinical social workers, my sampling frame likely included social workers who were not clinical social workers, like macro social workers or social work managers. However, I knew, based on the information from NASW marketers, that many people who received my recruitment email would be clinical social workers or those working towards licensure, so I was satisfied with that. Anyone who didn’t meet my inclusion criteria and opened the survey would be greeted with clear instructions that the survey did not apply to them.

At the same time, I should have assessed whether the demographics of the NASW email list and the demographics of clinical social workers more broadly were similar. Unfortunately, this was not information I could gather. I had to trust that this was likely to be the best and most representative sample of clinical social workers I could draw.

  • Before you start, what do you know about your setting and potential participants?
  • Are there likely to be enough people in the setting of your study who meet the inclusion criteria?

You want to avoid throwing out half of the surveys you get back because the respondents aren’t a part of your target population. This is a common error I see in student proposals.

Many of you will sample people from your agency, like clients or staff. Let’s say you work for a children’s mental health agency, and you wanted to study children who have experienced abuse. Walking through the steps here might proceed like this:

  • Think about or ask your coworkers how many of the clients at your agency have experienced this issue. If it’s common, then clients at your agency would probably make a good sampling frame for your study. If not, then you may want to adjust your research question or consider a different agency to sample. You could also redefine your target population to better match your sample. For example, while your agency’s clients may not be representative of all children who have survived abuse, they may be more representative of abuse survivors in your state, region, or county. In this way, you can draw conclusions about a smaller population, rather than everyone in the world who is a victim of child abuse.
  • Think about those characteristics that are important for individuals in your sample to have or not have. Obviously, the variables in your research question are important, but so are the variables related to it. Take a look at the empirical literature on your topic. Are there different demographic characteristics or covariates that are relevant to your topic?
  • All of this assumes that you can actually access information about your sampling frame prior to collecting data. This is a challenge in the real world. Even if you ask around your office about client characteristics, there is no way for you to know for sure until you complete your study whether it was the most representative sampling frame you could find. When in doubt, go with whatever is feasible and address any shortcomings in sampling within the limitations section of your research report. A good project is a done project.
Key Takeaways

  • While using a probability sampling approach helps with sample representativeness, it does not guarantee it. Due to random variation, samples may differ across important characteristics. If you can feasibly use a probability sampling approach, particularly stratified random sampling, it will help make your sample more representative of the population.
  • Even if you choose a sampling frame that is representative of your population and use a probability sampling approach, there is no guarantee that the sample you are able to collect will be representative. Sometimes, people don’t respond to your recruitment efforts. Other times, random chance will mean people differ on important characteristics from your target population. ¯\_(ツ)_/¯

In agency-based samples, the small size of the pool of potential participants makes it very likely that your sample will not be representative of a broader target population. For that reason, researchers sometimes deliberately study outcomes in specific sub-populations. Not all agency-based research is concerned with representativeness, and research that is relevant to only one location is still worthwhile to pursue, as its purpose is often to improve social work practice.


Sample size

Let’s assume you have found a representative sampling frame, and that you are using one of the probability sampling approaches we reviewed in section 10.2. That should help you recruit a representative sample, but how many people do you need to recruit into your sample? As with many questions about sample quality, students should keep feasibility in mind. The easiest answer I’ve given as a professor is, “as many as you can, without hurting yourself.” While your quantitative research question would likely benefit from hundreds or thousands of respondents, that is not likely to be feasible for a student who is working full-time, interning part-time, and in school full-time. Don’t feel like your study has to be perfect, but make sure you note any limitations in your final report.

To the extent possible, you should gather as many people as you can in your sample who meet your criteria. But why? Let’s think about an example you probably know well. Have you ever watched the TV show Family Feud? Each question the host reads off starts with, “we asked 100 people…” Believe it or not, Family Feud uses simple random sampling to conduct its surveys of the American public. Part of the challenge on Family Feud is that people can usually guess the most popular answers, but the answers that only a few people chose are much harder. They seem bizarre and are more difficult to guess. That’s because 100 people is not a lot of people to sample. Essentially, Family Feud is trying to measure the answer for all 327 million people in the United States by asking 100 of them. As a result, the weird and idiosyncratic responses of a few people are likely to remain on the board as answers, and contestants have to guess answers that fewer and fewer people in the sample provided. In a larger sample, the oddball answers would likely fade away and only the most popular answers would be represented on the game show’s board.

In my ill-fated study of clinical social workers, I received 87 complete responses. That is a tiny fraction of the more than one hundred thousand licensed or license-eligible clinical social workers. Moreover, since I wanted to conduct state-by-state estimates, there was no way I had enough people in each state to do so. For student projects, samples of 50–100 participants are more than enough to write a paper (or start a game show), but for projects in the real world with real-world consequences, it is important to recruit the appropriate number of participants. For example, if your agency conducts a community scan of people in your service area on what services they need, the results will inform the direction of your agency: which grants it applies for, whom it hires, and its mission for the next several years. Being overly confident in your sample could result in wasted resources for clients.

So what is the right number? Theoretically, we could gradually increase the sample size so that the sample gets closer and closer to the total size of the population (Bhattacherjee, 2012). [5] But as we’ve discussed, it is not feasible to sample everyone. How do we find the middle ground? To answer this, we need to understand the sampling distribution. Imagine that in your agency’s survey of the community, you took three different probability samples from your community, and for each sample, you measured whether people experienced domestic violence. If each random sample was truly representative of the population, then your rate of domestic violence from the three random samples would be about the same and equal to the true value in the population.

But this is extremely unlikely, given that each random sample will likely constitute a different subset of the population; hence, the rate of domestic violence you measure may be slightly different from sample to sample. Think about the sample you collect as one point on a distribution of infinite possible samples. Most samples you could collect will be close to the population mean, but some will not be. The degree to which samples differ from one another is associated with how much the subject you are measuring varies in the population. In our example, samples will vary based on how much the incidence of domestic violence varies from person to person. The difference between the domestic violence rate we find in a sample and the rate in the overall population is called the sampling error.
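A short simulation can make the sampling distribution concrete. Assuming an invented community of 50,000 people with a true domestic violence rate of 12%, the sketch below draws 1,000 random samples and shows how the measured rate varies from sample to sample; that spread is sampling error.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Invented population: True marks a person who experienced domestic violence.
population = rng.random(50_000) < 0.12

# Draw 1,000 random samples of 500 people; record the rate each one finds.
rates = [
    population[rng.choice(population.size, size=500, replace=False)].mean()
    for _ in range(1_000)
]

print(f"true population rate:   {population.mean():.3f}")
print(f"mean of sample rates:   {np.mean(rates):.3f}")
print(f"spread of sample rates: {np.std(rates):.3f}")  # sampling error
```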

An easy way to reduce sampling error is to increase the number of participants in your sample, but fully minimizing sampling error relies on a number of factors outside of the scope of a basic student project. You can consult an online statistics textbook for more examples of sampling distributions, or take an advanced methods course at your university, particularly if you are considering becoming a social work researcher. Increasing the number of people in your sample also increases your study’s power, or the odds you will detect a significant relationship between variables when one is truly present in your sample. If you intend to publish the findings of your student project, it is worth using a power analysis to determine the appropriate sample size for your project. You can follow the excellent video series from the Center for Open Science on how to conduct power analyses using free statistics software, and a faculty member who teaches research or statistics could check your work. You may be surprised to find that there is a point at which adding more people to your sample will not make your study any better.
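If you want to try a power analysis yourself, one free option (among many tools) is the statsmodels package in Python. A minimal sketch, assuming a two-group comparison and conventional thresholds:

```python
from statsmodels.stats.power import TTestIndPower

# Sample size per group needed to detect a medium effect (Cohen's d = 0.5)
# with 80% power at the conventional alpha of .05, for a two-sample t-test.
n_per_group = TTestIndPower().solve_power(
    effect_size=0.5, alpha=0.05, power=0.80
)
print(round(n_per_group))  # roughly 64 participants per group
```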

Honestly, I did not do a power analysis for my study. Instead, I asked for 5,000 surveys with the hope that 1,000 would come back. Given that only 87 came back, a power analysis conducted after the survey was complete would likely reveal that I did not have enough statistical power to answer my research questions. For your projects, try to get as many respondents as you feasibly can, but don’t worry too much about not reaching the optimal number of people to maximize the power of your study unless your goal is to publish something that is generalizable to a large population.

A final consideration is which statistical test you plan to use to analyze your data. We have not covered statistics yet, though we will provide a brief introduction to basic statistics in this textbook. For now, remember that some statistical tests have a minimum number of people that must be present in the sample in order to conduct the analysis. You will complete a data analysis plan before you begin your project and start sampling, so you can always increase the number of participants you plan to recruit based on what you learn in the next few chapters.

  • How many people can you feasibly sample in the time you have to complete your project?

Bias

One of the interesting things about surveying professionals is that sometimes, they email you about what they perceive to be a problem with your study. I got an email from a well-meaning participant in my LCSW study saying that my results were going to be biased! She pointed out that respondents who had been in practice a long time, before clinical supervision was required, would not have paid anything for supervision. This would lead me to draw conclusions that supervision was cheap, when in fact, it was expensive. My email back to her explained that she hit on one of my hypotheses, that social workers in practice for a longer period of time faced fewer costs to becoming licensed. Her email reinforced that I needed to account for the impact of length of practice on the costs of licensure I found across the sample. She was right to be on the lookout for bias in the sample.

One of the key questions you can ask is whether there is something about your process that makes it more likely you will select a certain type of person for your sample, making it less representative of the overall population. In my project, it’s worth thinking more about who is more likely to respond to an email advertisement for a research study. I know that my work email and personal email filter out advertisements, so it’s unlikely I would even see the recruitment for my own study (probably something I should have thought about before using grant funds to sample the NASW email list). Perhaps an older demographic that does not screen advertisements as closely, or those whose NASW account was linked to a personal email with fewer junk filters, would be more likely to respond. To the extent I made conclusions about clinical social workers of all ages based on a sample that was biased towards older social workers, my results would be biased. This is called selection bias: the degree to which the people in your sample differ systematically from the overall population because of how they were selected.

Another potential source of bias here is nonresponse bias. Because people do not often respond to email advertisements (no matter how well-written they are), my sample is likely to be representative of people with characteristics that make them more likely to respond. They may have more time on their hands to take surveys and respond to their junk mail. To the extent that the sample is composed of social workers with a lot of time on their hands (who are those people?), my sample will be biased and not representative of the overall population.

It’s important to note that both bias and error describe how samples differ from the overall population. Error describes random variation between samples, due to chance. Using a random process to recruit participants into a sample means you will have random variation between the sample and the population. Bias creates differences between the sample and the population in a specific direction, such as towards those who have time to check their junk mail. Bias may be introduced by the sampling method used or by conscious or unconscious biases introduced by the researcher (Rubin & Babbie, 2017). [6] A researcher might select people who “look like good research participants,” in the process transferring their unconscious biases to their sample. They might exclude people from the sample who “would not do well with the intervention.” Careful researchers can avoid some of these pitfalls, but unconscious and structural biases can be challenging to root out.
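A brief simulation can show the difference, using invented numbers loosely modeled on my licensure study: a random sample misses the true mean only by chance (error), while a sample that over-recruits older respondents, who paid less for supervision, misses it in a consistent direction (bias).

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Invented population of 10,000 social workers: age and supervision cost,
# where older workers tend to have paid less.
age = rng.normal(45, 10, 10_000)
cost = 5_000 - 50 * (age - 45) + rng.normal(0, 500, 10_000)

# Random sample of 200: differs from the population only by chance.
random_idx = rng.choice(10_000, 200, replace=False)

# Biased sample of 200: older workers are more likely to respond.
weights = (age - age.min()) ** 2
biased_idx = rng.choice(10_000, 200, replace=False, p=weights / weights.sum())

print(f"population mean cost: {cost.mean():.0f}")
print(f"random sample mean:   {cost[random_idx].mean():.0f}")  # off by chance
print(f"biased sample mean:   {cost[biased_idx].mean():.0f}")  # pushed downward
```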

  • Identify potential sources of bias in your sample and brainstorm ways you can minimize them, if possible.

Critical considerations

Think back to your undergraduate degree. Did you ever participate in a research project as part of an introductory psychology or sociology course? Social science researchers on college campuses have a luxury that researchers elsewhere may not share: access to a whole bunch of (presumably) willing and able human guinea pigs. But that luxury comes at a cost: sample representativeness. One study of top academic journals in psychology found that over two-thirds (68%) of participants in studies published in those journals were drawn from samples in the United States (Arnett, 2008). [7] Further, the study found that two-thirds of the work based on US samples published in the Journal of Personality and Social Psychology involved samples made up entirely of American undergraduate students taking psychology courses.

These findings certainly raise the question: What do we actually learn from social science studies and about whom do we learn it? That is exactly the concern raised by Joseph Henrich and colleagues (Henrich, Heine, & Norenzayan, 2010), [8] authors of the article “The Weirdest People in the World?” In their piece, Henrich and colleagues point out that behavioral scientists very commonly make sweeping claims about human nature based on samples drawn only from WEIRD (Western, Educated, Industrialized, Rich, and Democratic) societies, and often based on even narrower samples, as is the case with many studies relying on samples drawn from college classrooms. As it turns out, robust findings about the nature of human behavior when it comes to fairness, cooperation, visual perception, trust, and other behaviors are based on studies that excluded participants from outside the United States and sometimes excluded anyone outside the college classroom (Begley, 2010). [9] This certainly raises questions about what we really know about human behavior as opposed to US resident or US undergraduate behavior. Of course, not all research findings are based on samples of WEIRD folks like college students. But even then, it would behoove us to pay attention to the population on which studies are based and the claims being made about those to whom the studies apply.

Another thing to keep in mind is that just because a sample may be representative in all respects that a researcher thinks are relevant, there may be relevant aspects that didn’t occur to the researcher when she was drawing her sample. You might not think that a person’s phone would have much to do with their voting preferences, for example. But had pollsters making predictions about the results of the 2008 presidential election not been careful to include both cell phone-only and landline households in their surveys, it is possible that their predictions would have underestimated Barack Obama’s lead over John McCain because Obama was much more popular among cell phone-only users than McCain (Keeter, Dimock, & Christian, 2008). [10] This is another example of bias.


Putting it all together

So how do we know how good our sample is or how good the samples gathered by other researchers are? While there might not be any magic or always-true rules we can apply, there are a couple of things we can keep in mind as we read the claims researchers make about their findings.

First, remember that sample quality is determined only by the sample actually obtained, not by the sampling method itself. A researcher may set out to administer a survey to a representative sample by correctly employing a random sampling approach with impeccable recruitment materials. But if only a handful of the people sampled actually respond to the survey, the researcher cannot claim that the resulting sample is representative, however well the plan was executed.

Another thing to keep in mind, as demonstrated by the preceding discussion, is that researchers may be drawn to talking about the implications of their findings as though they apply to some group other than the population actually sampled. Whether the sampling frame does not match the population or the sample and population differ on important criteria, the resulting error and bias can lead to bad science.

We’ve talked previously about the perils of generalizing social science findings from undergraduate students in the United States and other Western countries to all cultures in the world, imposing a Western view as the right and correct view of the social world. As consumers of theory and research, it is our responsibility to be attentive to this sort of (likely unintentional) bait and switch. And as researchers, it is our responsibility to draw conclusions only from samples that are representative of the groups we claim to describe. A larger sample size and probability sampling can improve the representativeness and generalizability of a study’s findings to larger populations, though neither is a guarantee.

Finally, keep in mind that a sample allowing for comparisons of theoretically important concepts or variables is certainly better than one that does not allow for such comparisons. In a study based on a nonrepresentative sample, for example, we can learn about the strength of our social theories by comparing relevant aspects of social processes. We talked about this as theory-testing in Chapter 8 .

At their core, questions about sample quality should address who has been sampled, how they were sampled, and for what purpose they were sampled. Being able to answer those questions will help you better understand, and more responsibly interpret, research results. For your study, keep the following questions in mind.

  • Are your sample size and your sampling approach appropriate for your research question?
  • How much do you know about your sampling frame ahead of time? How will that impact the feasibility of different sampling approaches?
  • What gatekeepers and stakeholders are necessary to engage in order to access your sampling frame?
  • Are there any ethical issues that may make it difficult to sample those who have first-hand knowledge about your topic?
  • Does your sampling frame look like your population along important characteristics? Once you get your data, ask the same question of the sample you successfully recruit.
  • What about your population might make it more difficult or easier to sample?
  • Are there steps in your sampling procedure that may bias your sample to render it not representative of the population?
  • If you want to skip sampling altogether, are there sources of secondary data you can use? Or might you be able to answer your questions by sampling documents or media, rather than people?
Key Takeaways

  • The sampling plan you implement should have a reasonable likelihood of producing a representative sample. Student projects are given more leeway with nonrepresentative samples, and this limitation should be discussed in the student’s research report.
  • Researchers should conduct a power analysis to determine sample size, though quantitative student projects should endeavor to recruit as many participants as possible. Sample size impacts representativeness of the sample, its power, and which statistical tests can be conducted.
  • The sample you collect is one of an infinite number of potential samples that could have been drawn. To the extent the data in your sample varies from the data in the entire population, it includes some error or bias. Error is the result of random variations. Bias is systematic error that pushes the data in a given direction.
  • Even if you do everything right, there is no guarantee that you will draw a good sample. Flawed samples are okay to use as examples in the classroom, but the results of your research would have limited generalizability beyond your specific participants.
  • Historically, samples were drawn from dominant groups and generalized to all people. This shortcoming is a limitation of some social science literature and should be considered a colonialist scientific practice.
  1. I clearly need a snack.
  2. Johnson, P. S., & Johnson, M. W. (2014). Investigation of “bath salts” use patterns within an online sample of users in the United States. Journal of Psychoactive Drugs, 46(5), 369–378.
  3. Holt, J. L., & Gillespie, W. (2008). Intergenerational transmission of violence, threatened egoism, and reciprocity: A test of multiple psychosocial factors affecting intimate partner violence. American Journal of Criminal Justice, 33, 252–266.
  4. Engel, R. J., & Schutt, R. K. (2011). The practice of research in social work (2nd ed.). Thousand Oaks, CA: SAGE.
  5. Bhattacherjee, A. (2012). Social science research: Principles, methods, and practices. Retrieved from https://scholarcommons.usf.edu/cgi/viewcontent.cgi?article=1002&context=oa_textbooks
  6. Rubin, A., & Babbie, E. R. (2017). Research methods for social work (9th ed.). Boston, MA: Cengage.
  7. Arnett, J. J. (2008). The neglected 95%: Why American psychology needs to become less American. American Psychologist, 63, 602–614.
  8. Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33, 61–135.
  9. Newsweek published an interesting story about Henrich and colleagues’ study: Begley, S. (2010). What’s really human? The trouble with student guinea pigs. Retrieved from http://www.newsweek.com/2010/07/23/what-s-really-human.html
  10. Keeter, S., Dimock, M., & Christian, L. (2008). Calling cell phones in ’08 pre-election polls. The Pew Research Center for the People and the Press. Retrieved from http://people-press.org/files/legacy-pdf/cell-phone-commentary.pdf

Glossary

Unit of analysis: the entity that a researcher wants to say something about at the end of her study (individual, group, or organization)

Unit of observation: the entities that a researcher actually observes, measures, or collects in the course of trying to learn something about her unit of analysis (individuals, groups, or organizations)

Target population: the larger group of people you want to be able to make conclusions about based on the conclusions you draw from the people in your sample

Sampling frame: the list of people from which a researcher will draw her sample

Gatekeepers: the people or organizations who control access to the population you want to study

Institutional review board (IRB): an administrative body established to protect the rights and welfare of human research subjects recruited to participate in research activities conducted under the auspices of the institution with which it is affiliated

Inclusion criteria: general requirements a person must possess to be a part of your sample

Exclusion criteria: characteristics that disqualify a person from being included in a sample

Recruitment: the process by which the researcher informs potential participants about the study and attempts to get them to participate

Sample: the group of people you successfully recruit from your sampling frame to participate in your study

Probability sampling: sampling approaches for which a person’s likelihood of being selected from the sampling frame is known

Nonprobability sampling: sampling approaches for which a person’s likelihood of being selected for membership in the sample is unknown

Convenience sampling: the researcher gathers data from whatever cases happen to be convenient or available

Generalize: (as in generalization) to make claims about a large population based on a smaller sample of people or items

Simple random sampling: selecting elements from a list using randomly generated numbers

Elements: the units in your sampling frame, usually people or documents

Systematic sampling: selecting every kth element from your sampling frame

Sampling interval: the distance between the elements you select for inclusion in your study

Periodicity: the tendency for a pattern to occur at regular intervals
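To make the sampling definitions above concrete, here is a toy sketch in Python contrasting simple random and systematic sampling. The frame of 1,000 client IDs and the sample size are invented purely for illustration.

```python
import random

frame = list(range(1, 1001))  # elements in a hypothetical sampling frame
n = 100                       # desired sample size

# Simple random sampling: pick elements using randomly generated numbers.
srs_sample = random.sample(frame, n)

# Systematic sampling: pick every kth element after a random start.
k = len(frame) // n           # sampling interval (here k = 10)
start = random.randrange(k)   # random starting element
systematic_sample = frame[start::k]

# Caution: if the frame itself has periodicity (a pattern recurring every
# k elements), a systematic sample can over- or under-select certain cases;
# simple random sampling avoids that risk.
print(len(srs_sample), len(systematic_sample))  # 100 100
```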

Stratified random sampling: dividing the study population into subgroups based on a characteristic (or strata) and then drawing a sample from each subgroup

Strata: the characteristic by which the sample is divided in stratified random sampling

Cluster sampling: a sampling approach that begins by sampling groups (or clusters) of population elements and then selects elements from within those groups

Probability proportionate to size: in cluster sampling, giving clusters different chances of being selected based on their size so that each element within those clusters has an equal chance of being selected

Representative sample: a sample that looks like the population from which it was selected in all respects that are potentially relevant to the study
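Stratified and cluster sampling can be sketched the same way. The code below assumes a fictional frame of 900 people spread across three program sites; it draws a fixed number from each stratum, then contrasts that with sampling whole sites as clusters.

```python
import random

# A fictional sampling frame: 900 people across three program sites.
frame = [{"id": i, "site": "ABC"[i % 3]} for i in range(900)]

# Stratified random sampling: divide the frame by the stratifying
# characteristic (site), then draw randomly within each subgroup.
stratified_sample = []
for site in "ABC":
    subgroup = [p for p in frame if p["site"] == site]
    stratified_sample.extend(random.sample(subgroup, 30))  # 30 per stratum

# Cluster sampling: randomly select whole sites first, then sample
# elements only from within the chosen clusters.
chosen_sites = random.sample("ABC", 2)
pool = [p for p in frame if p["site"] in chosen_sites]
cluster_sample = random.sample(pool, 90)

print(len(stratified_sample), len(cluster_sample))  # 90 90
```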

Sampling distribution: the set of all possible samples you could possibly draw for your study

Sampling error: the difference between what you find in a sample and what actually exists in the population from which the sample was drawn

Statistical power: the odds you will detect a significant relationship between variables when one is truly present in your sample

Error: the degree to which the people in your sample differ from the overall population

Nonresponse bias: the bias that occurs when those who respond to your request to participate in a study are different from those who do not respond to your request to participate
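A short simulation can tie several of these terms together: drawing many samples from a made-up population approximates a sampling distribution, and the spread of the sample means around the true mean illustrates sampling error. All numbers here are fabricated for illustration.

```python
import random
import statistics

random.seed(1)  # reproducible illustration

# A made-up population of 10,000 scores.
population = [random.gauss(50, 10) for _ in range(10_000)]
true_mean = statistics.mean(population)

# Draw 1,000 samples of n = 100; their means approximate the
# sampling distribution of the mean.
sample_means = [statistics.mean(random.sample(population, 100))
                for _ in range(1_000)]

# Sampling error: how far sample means typically fall from the population mean.
typical_error = statistics.stdev(sample_means)
print(f"Population mean: {true_mean:.2f}")
print(f"Typical sampling error of the mean: {typical_error:.2f}")
```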

Graduate research methods in social work Copyright © 2021 by Matthew DeCarlo, Cory Cummings, Kate Agnelli is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.


4.3 Quantitative research questions

Learning Objectives

  • Describe how exploratory, descriptive, and explanatory quantitative research questions differ and how to phrase them
  • Identify the differences between and provide examples of strong and weak explanatory research questions

Quantitative descriptive questions

The type of research you are conducting will impact the research question that you ask. Probably the easiest questions to think of are quantitative descriptive questions. For example, “What is the average student debt load of MSW students?” is a descriptive question—and an important one. We aren’t trying to build a causal relationship here. We’re simply trying to describe how much debt MSW students carry. Quantitative descriptive questions like this one are helpful in social work practice as part of community scans, in which human service agencies survey the various needs of the community they serve. If the scan reveals that the community requires more services related to housing, child care, or day treatment for people with disabilities, a nonprofit office can use the community scan to create new programs that meet a defined community need.


Quantitative descriptive questions will often ask for a percentage, a count of the instances of a phenomenon, or an average. Descriptive questions may include only one variable, such as ours about debt load, or they may include multiple variables. Because these are descriptive questions, we cannot use them to investigate causal relationships between variables. To do that, we need to use a quantitative explanatory question.
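As a minimal sketch of how such a question gets answered, assume a hypothetical set of survey responses about MSW student debt; the figures below are invented, not findings.

```python
import statistics

# Hypothetical survey responses: self-reported student debt in dollars.
debt_loads = [41_000, 55_500, 0, 62_250, 38_000, 47_750]

average_debt = statistics.mean(debt_loads)
pct_with_debt = 100 * sum(d > 0 for d in debt_loads) / len(debt_loads)

print(f"Average debt load: ${average_debt:,.0f}")
print(f"Percent carrying any debt: {pct_with_debt:.0f}%")
```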

Quantitative explanatory questions

Most studies you read in the academic literature will be quantitative and explanatory. Why is that? Explanatory research tries to build something called nomothetic causal explanations. Matthew DeCarlo says “com[ing] up with a broad, sweeping explanation that is universally true for all people” is the hallmark of nomothetic causal relationships (DeCarlo, 2018, chapter 7.2, para. 5). They are generalizable across space and time, so they are applicable to a wide audience. The editorial board of a journal wants to make sure their content will be useful to as many people as possible, so it’s not surprising that quantitative research dominates the academic literature.

Structurally, quantitative explanatory questions must contain an independent variable and dependent variable. Questions should ask about the relation between these variables. A standard format for an explanatory quantitative research question is: “What is the relation between [independent variable] and [dependent variable] for [target population]?” You should play with the wording for your research question, revising it as you see fit. The goal is to make the research question reflect what you really want to know in your study.
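Once an explanatory question names its independent and dependent variables, the analysis tests the relation between them. Here is a minimal sketch using SciPy; the variable names and values are hypothetical stand-ins, not real data.

```python
from scipy.stats import pearsonr

# Hypothetical paired measurements for six students:
# independent variable (weekly work hours) and dependent variable (debt load).
hours_worked = [10, 20, 25, 30, 35, 40]
debt_load = [30_000, 38_000, 41_000, 52_000, 55_000, 61_000]

# Pearson's r summarizes the strength of the relation; p tests significance.
r, p_value = pearsonr(hours_worked, debt_load)
print(f"r = {r:.2f}, p = {p_value:.4f}")
```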

Let’s take a look at a few more examples of possible research questions and consider the relative strengths and weaknesses of each. Table 4.1 does just that. While reading the table, keep in mind that it only includes some of the most relevant strengths and weaknesses of each question. Certainly each question may have additional strengths and weaknesses not noted in the table.

Making it more specific

A good research question should also be specific and clear about the concepts it addresses. A group of students investigating gender and household tasks knows what they mean by “household tasks.” You likely also have an impression of what “household tasks” means. But are your definition and the students’ definition the same? A participant in their study may think that managing finances and performing home maintenance are household tasks, but the researcher may be interested in other tasks like childcare or cleaning. The only way to ensure your study stays focused and clear is to be specific about what you mean by a concept. The students in our example could pick a specific household task that was interesting to them or that the literature indicated was important—for example, childcare. Or, the students could take a broader view of household tasks, one that encompasses childcare, food preparation, financial management, home repair, and care for relatives. Any option is probably okay, as long as the researchers are clear on what they mean by “household tasks.”

Table 4.2 contains some “watch words” that indicate you may need to be more specific about the concepts in your research question.

It can be challenging in social work research to be this specific, particularly when you are just starting your investigation of the topic. If you’ve only read one or two articles on the topic, it can be hard to know what you are interested in studying. Broad questions like “What are the causes of chronic homelessness, and what can be done to prevent it?” are common at the beginning stages of a research project. However, social work research demands that you examine the literature on the topic and refine your question over time to be more specific and clear before you begin your study. Perhaps you want to study the effect of a specific anti-homelessness program that you found in the literature. Maybe there is a particular model for fighting homelessness, like Housing First or transitional housing, that you want to investigate further. You may want to focus on a potential cause of homelessness, such as LGBTQ discrimination, that you find interesting or relevant to your practice. As you can see, the possibilities for making your question more specific are almost infinite.

Quantitative exploratory questions

In exploratory research, the researcher doesn’t quite know the lay of the land yet. If someone is proposing to conduct an exploratory quantitative project, the watch words highlighted in Table 4.2 are not problematic at all. In fact, questions such as “What factors influence the removal of children in child welfare cases?” are good because they will explore a variety of factors or causes. In this question, the independent variable, the factors, is less clearly written, but the dependent variable, the removal of children, is quite clearly written. The inverse can also be true. If we were to ask, “What outcomes are associated with family preservation services in child welfare?”, we would have a clear independent variable, family preservation services, but an unclear dependent variable, outcomes. Because we are only conducting exploratory research on a topic, we may not have an idea of what concepts may comprise our “outcomes” or “factors.” Only after interacting with our participants will we be able to understand which concepts are important.

Key Takeaways

  • Quantitative descriptive questions are helpful for community scans but cannot investigate causal relationships between variables.
  • Quantitative explanatory questions must include an independent and dependent variable.


Guidebook for Social Work Literature Reviews and Research Questions Copyright © 2020 by Rebecca Mauldin and Matthew DeCarlo is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.


Can Quantitative Research Solve Social Problems? Pragmatism and the Ethics of Social Research

  • Open access
  • Published: 13 June 2019
  • Volume 167, pages 41–48 (2020)


  • Thomas C. Powell


Journal of Business Ethics recently published a critique of ethical practices in quantitative research by Zyphur and Pierides (J Bus Ethics 143:1–16, 2017). The authors argued that quantitative research prevents researchers from addressing urgent problems facing humanity today, such as poverty, racial inequality, and climate change. I offer comments and observations on the authors’ critique. I agree with the authors in many areas of philosophy, ethics, and social research, while making suggestions for clarification and development. Interpreting the paper through the pragmatism of William James, I suggest that the authors’ arguments are unlikely to change attitudes in traditional quantitative research, though they may point the way to a new worldview, or Jamesian “sub-world,” in social research.


Introduction

I was invited by the editors of this journal to comment on an article called “Is Quantitative Research Ethical? Tools for Ethically Practicing, Evaluating, and Using Quantitative Research” (Zyphur and Pierides 2017 ). The topic of the article is important and of great intrinsic interest to me, so I am pleased to offer this commentary. As will become clear, I agree with much of what the authors wrote, and with their overall philosophical orientation. The authors presented a compelling critique of traditional approaches to quantitative research (QR) in the social sciences, offering an ethical orientation for organizational research and providing helpful examples of ethical approaches to QR in management studies.

Of course, agreement makes uninteresting commentary and it is pointless to repeat the authors’ arguments and agree with them. What I have done instead is to play the devil’s advocate, selecting a few key points from the paper and evaluating them from the perspective of a traditional QR practitioner. For example, the authors argued that traditional approaches to QR are not objective but value-laden. I agree with this (see Powell 2001a ), but I evaluate whether the authors’ proposals alleviate this problem or make it worse. The authors argued that quantitative researchers should do their part to solve human problems. I agree with this (see Powell 2014a ), but I consider whether the problems they discussed were caused by quantitative methods, and whether it is reasonable to expect any research method to solve them. The authors endorsed a pragmatist epistemology as against foundationalist or “correspondence” epistemologies. I agree with this (see Powell 2001b , 2002 , 2003 , 2014b ), but I review the origins of pragmatist philosophy and consider whether pragmatism can legitimately be used to justify the social agendas of academic researchers.

My overall theme is that we should be careful what we wish for. If all social research is value-laden, then we should be wary of privileging the values of a particular community, including the community of socially minded elites working in universities, or of academic journal editors who aspire to make social impacts in the world. Many of us want to solve human problems but we should also examine ourselves, asking if we are the right people for the job, working in the right places, carrying the right tools. The authors’ pragmatic philosophy can help explain the nature of research practices, but it may not be the best platform for choosing between the competing values of academic researchers.

In the next section, I briefly summarize the argument in Zyphur and Pierides (2017) and provide overall comments. After that, I explore a few of the authors’ arguments in detail, considering their wider implications for social research. I conclude with a discussion of pragmatism in social research, examining the pragmatism of William James and its consequences for ethics in social research.

Overview of Zyphur and Pierides (2017)

Zyphur and Pierides (2017) offered an ethical critique of traditional quantitative research (QR) in the social sciences. The authors challenged the philosophical foundations of traditional QR, such as its rationalist-materialist ontology and presumed scientific “objectivity.” They also challenged specific methods and practices in social research, including “best practices” in research design, sampling, measurement, hypothesis testing and data analysis.

The authors’ core message was that traditional QR practices are not objective but value-laden. Implicitly or otherwise, traditional QR takes an ethical position that impedes research that might address the real problems facing humankind today, such as racial inequality, poverty, corruption, and climate change. The authors did not reject QR as an enterprise, but criticized the assumption that standard QR “best practices” are objective and value-free. The authors called for a new “built for purpose” approach, in which researchers make their ethical purposes clear at the start, adapting research designs and analytical methods to those purposes.

The authors criticized traditional assumptions of “representation,” “correspondence,” and “probabilistic inference.” In traditional QR, researchers begin by defining theoretical constructs or variables that purportedly represent tangible or intangible objects in the real world. QR practitioners assume that these labels correspond to things in the external world with a degree of one-to-one accuracy; and they propose correlational or causal relations, which presumably correspond with how these objects relate in the real world. These relations are then tested by drawing samples which presumably correspond to larger populations. To establish the correspondence of sample results to populations, researchers use probabilistic inference, which they represent by statistical artefacts such as regression coefficients, confidence intervals and p -values.

The authors argued that the assumptions of traditional QR—that labels correspond to things; that hypothesized relations correspond to actual relations; that samples correspond to populations; that probabilistic inference allows valid inductions—are open to philosophical and methodological objections across the board, some of which the authors discussed in the paper. More to the point, however, traditional QR methods smuggle hidden values into the research process, dictating which research questions can be asked, how constructs can be measured, and how data can be gathered and analysed. According to the authors, these values, operating under a cover of scientific respectability, impede social researchers who would use QR to improve the human condition.

As an alternative to traditional QR, the authors proposed a “built for purpose” approach, in which researchers do not mimic traditional QR, but develop QR practices capable of addressing real human problems in the world. Instead of starting with representations and correspondences, researchers should start with a clear ethical purpose—for example, to combat corporate corruption or to eliminate racial discrimination. Instead of focusing on concept validity, construct validity, and other correspondences, researchers should maximize “relational validity”—that is, the “mutual fitness” of research designs, analytical methods, and ethical purposes. According to the authors, “relational validity offers a novel response to the centuries old problem of induction.” (p. 12).

The authors argued that a “built for purpose” approach requires new perspectives and research practices, which they called “orientations” and “ways of doing.” The new “orientations” require researchers to begin by asking, “Who is the research for?” “What is it trying to achieve?” and “How will it improve the human condition?” The new “ways of doing” require researchers to adapt research methods—measurement, sampling, data analysis, and causal inferences—to ethical purposes. In proposing this approach, the authors aimed to dispel traditional QR’s fixation on scientific objectivity by “putting QR to work for other purposes that are of greater concern—inequality, global warming, or corruption.” (p. 2).

Comments on Zyphur and Pierides (2017)

To evaluate the paper on its own terms, I want to say what I think the authors were trying to do, and not trying to do. In particular, I do not think the authors were making an exhaustive technical critique of quantitative methods in social research. The authors criticized certain tendencies in QR practice—such as data-mining for low p -values (“ p -hacking”), and focusing on averages in regression analysis—but seemed less concerned with statistical technique than with the broader goal of “disrupting the universality” of scientific method. Their main recommendations—that researchers should focus on human problems, “built for purpose” research designs, and the “mutual fitness” of methods and purposes—could be applied equally to quantitative or qualitative research. If the authors had intended the paper as a technical deconstruction of QR in the social sciences, much more could have been said, and indeed has been said by other authors (e.g. Gelman 2015 ; Schwab et al. 2011 ; Simmons et al. 2011 ; Vul et al. 2009 ).

But the authors focused on a different point; namely, that traditional QR practices impede research on social problems even when these practices are used as they were intended. Researchers should avoid obvious errors in statistical inference, such as inferring causation from correlation. But if the real problem in QR is unthinking obedience to the orthodoxies of scientific method, the issues are behavioural rather than statistical. For example, replicability and generalizability are sound QR principles, but they incentivize research on repeatable problems while neglecting specific or non-replicable contexts; and representative sampling improves probabilistic inference, but many human problems involve minorities facing unique hardships. Hence the authors focused less on QR technique than on the need for new “orientations” and “ways of doing” in the choice and implementation of QR methods.

This approach places the authors in a tradition reminiscent of C. Wright Mills in The Sociological Imagination ( 1959 ). Mills criticized the “abstracted empiricism” of quantitative sociology in mid-20th century North America; that is, the trend of importing assumptions from the natural sciences, defining social constructs as if they were physical objects, and using statistical methods to define problems rather than the other way around. Like Zyphur and Pierides, Mills argued that social researchers should put the scientific method in the service of research problems: “Controversy over different views of ‘methodology’ and ‘theory’ is properly carried on in close and continuous relation with substantive problems.” (Mills 1959 , p. 83) Mills was concerned less with statistical methods than with the philosophies lurking beneath the supposed “objectivity” of quantitative social research:

As a matter of practice, abstracted empiricists often seem more concerned with the… Scientific Method. Methodology, in short, seems to determine the problems. And this, after all, is only to be expected. The Scientific Method that is projected here did not grow out of, and is not a generalization of, what are generally and correctly taken to be the classic lines of social science work. It has been largely drawn, with expedient modifications, from one philosophy of natural science. (Mills 1959 , pp. 39, 40)

The authors share not only Mills’ scepticism of scientific method but also his philosophical pragmatism. It is important to recognize the authors’ pragmatism and not to conflate it with ontologies aligned with nominalism, subjectivism, social constructionism, and postmodern social theory. The authors sympathize with these views, but to classify their position as “subjectivism” or “social constructionism” would be to misunderstand what they are saying. When the authors use a term like “correspondence,” they are not making a vague reference to similarity, but invoking the terminology of the pragmatist philosophy of science. Pragmatism anticipated many of the philosophical moves that would later characterize postmodern social theory, but the two approaches have different origins, and different consequences for social research.

Although the authors explained their pragmatism in an earlier paper (Zyphur et al. 2016 ), they might have done more in the current paper to guide readers through their philosophical position, linking pragmatism with their critique of traditional QR and recommendations for future QR practice. As it stands, the paper seems to blend non-pragmatist and pragmatist ideas together in a kind of free-floating subjectivist relativism that may strike some readers as confusing or unhelpful. For example, in explaining their approach, the authors used abstract language that is hard to place in any philosophical tradition:

To begin, we put forth two infinitely long and intersecting dimensions of QR practice that we call orientations and ways of doing, which connect purposes to QR practice. Instead of being ‘foundations’ or somehow fundamental in a representation correspondence sense, each category and its contents are akin to idioms or axiomatic lists that tend toward infinity because they can be populated indefinitely, limited only by the creativity of those who adopt them. They may also be orthogonal, indicating that each orientation can, at least in theory, be combined with any way of doing QR in order to achieve a given purpose. In what follows, we describe these dimensions, beginning to populate the lists that may constitute each dimension while illustrating the fruitfulness of combinations that emerge. However, there are two caveats to mentioned upfront which, if ignored, undermine our broader recommendations. (p. 4)

Due in part to the paper’s lack of clarity, both in language and narrative structure, a skeptical reader might argue that the authors have left their recommendations open to ethical misinterpretation, even by those who want to put them into practice. For example, a reader might interpret the authors’ “built for purpose” method roughly as follows: (1) Identify a serious social problem (poverty, racial inequality, climate change, etc.); (2) Choose a desired outcome (elimination of poverty, racial equality, climate stabilization, etc.); (3) Design a quantitative study that demonstrates the severity of the problem; (4) Use the quantitative results to campaign for social change that solves the problem.

This is an oversimplification of the authors’ advice, but a traditional QR practitioner might argue that the method ignores the crucial distinction between ethical outcomes and ethical processes. The received scientific method has many faults, but it recognizes, in principle, that scientists should not choose their desired outcomes or contrive their research processes to achieve those outcomes. Admittedly, scientists have abused scientific method in exactly this way, but the whole point of scientific method is to neutralize researchers’ preferences. Without a relatively objective process, researchers will choose the outcomes they want and manipulate research processes to achieve them. These manipulations may produce social changes, and some of the changes may be socially desirable—but this is not an ethical process unless we believe that “the ends justify the means” (consequentialist ethics) or that “bad people achieve their goals this way, so good people must do it too” (compensatory ethics). Either way, achieving social purposes comes at a high ethical price.

Similarly, a critic might stand behind the Rawlsian “veil of ignorance” (Rawls 1971 ) and ask: If we wanted accurate and reliable research on a social problem, would we prefer a team of researchers bound to a fixed research process they believe is ethical, or a team of researchers bound to a fixed social outcome they believe is ethical? Either team might have false beliefs, so the research process might actually be unethical, or the social outcome might be unethical. The problem is that a research team bound to a fixed outcome will reach the same conclusions whether the outcome is ethical or not, whereas a team that follows a fixed process has a chance of reaching new conclusions; and, if its process is ethical, of reaching conclusions independent of its own preferences. A reasonable person behind the Rawlsian veil might prefer a process capable of producing new or unbiased results, even if the process was imperfectly implemented.

Traditional quantitative researchers might also challenge the authors’ assumption that researchers who practice their methods are the ones who should define the world’s problems and decide which ideas get published. How do we know their values are trustworthy? If an ethical problem exists with scientific method, should we relocate our trust from the scientific community to a sub-community of socially minded university professors, journal editors and government funding agencies? Does this sub-community have a shared and coherent view of social ethics and human purposes—and if not, by what process will they define and prioritize social outcomes? What would stop an ambitious social researcher with political connections and a social media profile from hijacking the method to perpetrate mass social harm?

The authors rightly point out that traditional QR is not value-free, nor is scientific method in general. Scientists often allow quantitative methods to dictate research problems, and they have perverse incentives to find publishable results in their data. When QR is driven by methods indifferent to human purposes, it crowds out research that might address large-scale human problems. All of this is true. On the other hand, the authors’ claim that traditional QR is “ethics-laden” relies on the charge that it “produces an orientation toward ‘facts’ rather than ‘values.’” (p. 3). Therefore, the authors need to show how a commitment to facts undermines the solving of human problems, and how a commitment to values removes the ethical biases of traditional methods. Unfortunately the paper does not provide the needed clarity:

By separating facts from values, facts appear to be unrelated to ethics; and with a focus on facts, ethics appear irrelevant for QR validity… New understandings of validity are needed to address the ways that QR is an ethical act and ethically consequential. This ethicality may be unrelated to representation or correspondence, such as if QR is meant to produce images of society that change the way people think and act – an enactment of a reality that did not yet exist to be merely ‘represented’… (p. 7)

Many researchers, qualitative and quantitative alike, may disagree with the authors on the fundamental nature and purpose of social research. For example, the authors implicated traditional QR in the global financial crisis (p. 10), but many traditional researchers would reject any suggestion that the financial crisis can be laid at the doorstep of a research method—why not qualitative methods?—or that a research method can solve the world’s problems. Traditional QR practitioners would acknowledge that “Determine the nature and extent of human poverty” is a QR problem; but they would not acknowledge that “Eliminate human poverty” is a QR problem. In their worldview, it is a human problem, a social problem, an economic problem, a political problem, a gender problem, a racial problem, and many other kinds of problem. QR can help us understand what is going on in the domain of human poverty, and QR analysis provides input to policy-makers. But this does not prove that researchers should stop using “best practices,” but merely begs the question of whether traditional QR or “built for purpose” QR is the better method for understanding what is going on.

QR practitioners might also question the authors’ logical consistency in rejecting assumptions like “representation” and “correspondence.” Presumably, when the authors made statements about traditional QR—for example, that QR includes hypothesis testing and regression analysis—they were affirming that their sentences represented something, and that their ideas corresponded to something beyond words on a page. When the authors wrote “QR is often done in terms of representation and correspondence (Zyphur et al. 2016 )” (p. 2), they affirmed that this proposition, and the term “Zyphur et al. ( 2016 ),” corresponded to things and persons that possessed a reality independent of the words, even if that reality was a social construction rather than an objective material object. In other words, the authors seemed to be using representation and correspondence to critique representation and correspondence.

I think QR researchers will be especially interested in the illustrations of exemplary QR provided by the authors. Without critiquing the papers individually, the examples show that the search for better QR methods is fraught with pitfalls. Regardless of QR methods, social research is a human process concerned with human subjects. It is not obvious that researchers who follow the QR method of Zyphur and Pierides are behaving more ethically than researchers who follow traditional QR, even when they are researching worthy causes. The choice is not between ethical and unethical QR, but among a range of imperfect quantitative methods, each inviting its own forms of human error. Along with the authors, I hope that QR can make contributions to solving human problems; but I also believe that if we did not have something like traditional QR, we would have to invent it. Whatever its flaws, the question is not whether scientific method eliminates human error, but whether, among the imperfect alternatives available to us, it gives us the best explanations for what is going on in the world.

This is why we must be careful what we wish for. Humanity is confronted with many problems, and social researchers need to find a way to do their part. Whatever objections can be raised against the paper, I endorse what the authors are trying to achieve. But removing or reforming traditional QR will not solve the problems of the human condition because these problems are not caused by a research method. The problems of the human condition are caused by people, and shifting responsibility from one group of researchers to another will not improve the human condition. We should use QR to address human problems, while bearing in mind that the authors’ method confers power to solve the world’s problems on fallible people and institutions—academic elites, journal editors, and the governments, corporations and philanthropic agencies that fund social research—which have vested interests of their own, and are, to a significant degree, responsible for the problems we are trying to solve.

William James and Ethics in Management Research

I have written elsewhere about pragmatism and its relevance for organizational research (Powell 2001a, 2002, 2014a). In this section, I want to show how pragmatism, or a version of it, relates to the ethical issues raised by Zyphur and Pierides. In particular, I want to examine the pragmatism of William James and its consequences for ethics in management research.

In doing this I am prompted by statements in Zyphur and Pierides (2017) and its predecessor (Zyphur et al. 2016), such as:

‘Best practices’… may be useful for the purpose of standardizing QR, but this… distracts from the task of putting QR to work for other purposes. (Zyphur and Pierides 2017, p. 14)

Inductive inference means actively working to enact research purposes, making research ‘true’ by helping it to shape the world. (Zyphur and Pierides 2017, p. 13)

Many pragmatists propose that organizing practical action is the point of thinking and speaking… (Zyphur et al. 2016, p. 478)

Instead of having to gather large samples to avoid statistical errors of inference, researchers would be better off trying to guard against actions that have unhelpful consequences. (Zyphur et al. 2016, p. 478)

I agree with the authors’ turn to pragmatism and their rejection of conventional philosophical foundations. As a philosophy of science, pragmatism is concerned with human purposes, and the authors are right to cite pragmatism in supporting ethics in social research. However I hope that readers will not misconstrue pragmatism as a literal invitation to reject QR “best practices” while “putting QR to work for other purposes.” Philosophical pragmatism argues that most statements about truth and being (epistemology and ontology) can be resolved into statements about things people actually do, or might do. They do not resolve, however, into statements about what people should do. To explain why pragmatists make this distinction, and to show its consequences for the method proposed by Zyphur and Pierides, requires a brief digression on the origins of William James’s pragmatism.

William James developed his pragmatist philosophy over many years, exploring its consequences for psychology, epistemology, ontology and philosophy of science (e.g., James 1890a, 1902, 1907). In the two-volume Principles of Psychology (1890b), James linked psychology with philosophy through the concept of belief. He asked: What happens in the consciousness of a subjectively experiencing human being when confronted with a statement of fact, or proposition? What feelings are evoked? How does a person translate the words of a proposition into the state of consciousness we call “belief”?

James argued that belief occurs when a proposition evokes marked feelings of subjective rightness, like a puzzle piece fitting into place. Instead of discomfort or agitation, a proposition evokes a sense of cognitive and emotional harmony, producing a warm psychological glow of mental assent. In Jamesian psychology, belief is the warm psychological glow of mental assent. People justify their beliefs using underlying reasons or causes, such as sense data, logic, intuition, authority, persuasion, or common sense. But belief is a feeling, not a fact; and the belief that a belief is a fact and not a feeling, is also a feeling. In Jamesian psychology, a justification only counts as belief when it evokes the feeling, or psychological “yes” signal, that provides the glow of subjective rightness.

James’s theory of belief formed the psychological foundation for his pragmatism. An object becomes real for people when they believe in its existence; and a proposition becomes true for people when they believe in its truth. Scientific propositions become true for scientists when scientists believe in their truth. This does not mean that scientific propositions are arbitrary, or do not correspond with sense observations shared with other people. In Jamesian ontology, people have no direct access to metaphysical “truths” or “realities,” so their beliefs rely on sense data, logic, intuition, and other forms of justification. Indeed, other peoples’ beliefs are themselves unobservable, so we infer them from what people say and do. When we affirm that something is “true” for other people, what we mean is that we have observed people saying and doing things that make us believe that they believe that the thing is true. Human behaviour—what people say and do, including ourselves—comprises the whole of the evidence.

Despite his interest in the experiencing person, James did not see pragmatism as justifying a relativist or subjectivist social science, any more than it justified a positivist or objectivist one. James began as a physiologist, building his psychological theories on experimental physiology and the functional anatomy of the human brain; and his pragmatism did not deny the existence of the external world, the laws of propositional logic, or the validity of scientific method. James observed reality through the medium of human consciousness, which he held to speak for itself and not for things outside itself. James did not deny the existence of an external world, but saw the external world as becoming manifest through the meanings conferred on it in the perceptions and interpretations of human beings.

In Jamesian pragmatism, people adopt beliefs in order to solve problems in human consciousness, and these problems are infinitely varied. James did not propose pragmatism as a form of prescriptive advice to scientists, urging them to focus on final purposes instead of processes, or to “be practical” instead of thinking abstractly. The problems of poets and metaphysicians are not “practical” compared to the problems of engineers and airline pilots, but pragmatism does not urge poets to become more practical. Pragmatism is a descriptive theory of human ontology and epistemology which holds that people derive their ideas of truth and reality not from comparisons between propositions and realities, but by solving problems that arise in human consciousness. The theory makes no claims about the degree of “practicality” of interests or problems people may or should have.

Beyond affirming the centrality of human experience for psychology and philosophy of science, James argued that any phenomenon in the domain of human consciousness can become a legitimate object of human inquiry. Whatever has meaning for an experiencing human being has ontological reality, and there are no greater or lesser realities. The beliefs we call “scientific,” derived from theory and empirical observation, do not have higher ontological status, or “more reality,” than other propositions in human consciousness, and hence no superior claim on truth. Everything that enters human consciousness has the same eligibility for inquiry, examination and analysis, whether derived from sense experience, reasoning, dreams, delusions, hallucinations, mythology, or religious faith.

James saw the natural sciences as domains of human inquiry concerned with aspects of human consciousness associated with the natural world. While recognizing the widespread human impacts of science, James regarded the scientific community as one of many “sub-worlds” of human inquiry (see James 1890b , p. 291). A Jamesian sub-world, like a Wittgensteinian “language game,” denotes a community of inquiry with its own conventions, beliefs and vocabularies (on Wittgenstein’s reliance on James’s pragmatism, see Goodman 2002 ). By the conventions of the scientific sub-world, scientists follow a method of inquiry grounded in theory, quantification, measurement and experiment, while rejecting propositions grounded in private opinion or groundless speculation. James acknowledged the efficacy and significance of science, since its problems very often affect non-scientists, and solving them makes life better (or worse) for many people. At the same time, James affirmed that scientific problem-solving conventions do not give scientists superior access to extra-experiential realities, but function in human consciousness like other conventions.

James’s philosophy allowed for great pluralism among the sub-worlds of human inquiry; for example, there are sub-worlds of myth, literature, art and religion, each with its own conventions, beliefs, and vocabularies. James argued that these sub-worlds produced beliefs that were no less real or true, within their own conventions, than the beliefs of natural scientists—and that it was futile to judge the beliefs of one sub-world by the conventions of another: to the poet, a scientific discovery may seem misguided; or to a scientist, a religious inspiration may seem superstitious. Participants negotiate conventions within their own sub-worlds, but pragmatism does not make value judgments about the comparative legitimacy of sub-worlds. As in James’s The Varieties of Religious Experience (1902), any phenomenon that appears in human consciousness is fruitful subject matter for human inquiry.

And this is where ethics comes in. Pragmatists do not hold that truth is relative or a matter of preference. Each sub-world of inquiry follows its own problem-solving conventions, and these conventions become relatively hardened, even as they continue to evolve. Within the sub-world of science, truth is not groundless but evidence-based, and to pretend otherwise is not merely to get it wrong, but to behave unethically. Scientists can debate the kinds of evidence that bear on a particular question, but they cannot debate whether evidence bears on scientific questions. Theologians can debate whether God is known by faith, revelation, or church tradition, but they cannot debate whether God is relevant to theology. Outside claimants who deny the conventions of a sub-world, unlike insiders disputing how those conventions are applied, are perceived not merely as mistaken, but as immoral, ignorant or both. Disputes within a sub-world of practice can solve problems for participants, but disputes across sub-worlds tend to devolve into name-calling and ethical recrimination.

From a Jamesian pragmatist perspective, the debate initiated by Zyphur and Pierides is a dispute across sub-worlds rather than a conversation that can be progressed within the sub-world of QR practice. Quantitative social research is a legitimate sub-world of practice, and like the proposals by C. Wright Mills a half-century earlier, the authors’ proposals are unlikely to alter or deter that sub-world. From the perspective of the traditional QR sub-world, the debate initiated by the authors—dropping QR best practices and using QR to achieve researchers’ social aspirations—is not so much a challenge to QR practice as a kind of category mistake. The authors’ paper is not written in the language of quantitative researchers, it does not acknowledge their legitimacy, it does not anticipate or address their potential responses, and it proposes to overthrow the standards of their community without discussing the consequences of abandoning those standards. The authors seem to inhabit a different sub-world altogether, and the two sub-worlds do not have very much to say to each other.

I believe this is a good thing. If traditional QR is a legitimate sub-world of practice, so is the sub-world proposed by the authors. I can imagine a sub-world in which people put QR in the service of solving human problems, with its own conventions, beliefs and vocabularies. This sub-world cannot replace traditional QR or operate within it, but requires an independent code of practice and community of participants. To build this community within the conventions of traditional QR would constrain and diminish both traditional QR and what the authors are trying to accomplish. The authors have started the conversation by articulating a set of principles for purpose-driven QR. Perhaps now they can define the social and intellectual agenda for this community, and build the human and institutional infrastructure required for its growth and development. I support what they are trying to do and I hope they succeed.

Gelman, A. (2015). The connection between varying treatment effects and the crisis of unreplicable research: A Bayesian perspective. Journal of Management, 41 (2), 632–643.


Goodman, R. B. (2002). Wittgenstein and William James . Cambridge UK: Cambridge University Press.


James, W. (1890a). The principles of psychology (Vol. I). New York: Henry Holt and Company.


James, W. (1890b). The principles of psychology (Vol. II). New York: Henry Holt and Company.

James, W. (1902). The varieties of religious experience: A study in human nature . London: Longmans, Green and Co.

James, W. (1907). Pragmatism: A new word for some old ways of thinking . London: Longmans, Green and Co.

Mills, C. W. (1959). The sociological imagination . New York: Oxford University Press.

Powell, T. C. (2001a). Competitive advantage: Logical and philosophical considerations. Strategic Management Journal, 22 (9), 875–888.

Powell, T. C. (2001b). Fallibilism and organizational research: The third epistemology. Journal of Management Research, 4, 201–219.

Powell, T. C. (2002). The philosophy of strategy. Strategic Management Journal, 23 (9), 873–880.

Powell, T. C. (2003). Strategy without ontology. Strategic Management Journal, 24 (3), 285–291.

Powell, T. C. (2014a). Strategic management and the person. Strategic Organization, 12 (3), 200–207.

Powell, T. C. (2014b). William James. In Jenny Helin, Tor Hernes, Daniel Hjorth, & Robin Holt (Eds.), The Oxford handbook of process philosophy and organization studies (pp. 166–184). Oxford: Oxford University Press.

Rawls, J. (1971). A theory of justice . Cambridge MA: Belknap.

Schwab, A., Abrahamson, E., Starbuck, W. H., & Fidler, F. (2011). Researchers should make thoughtful assessments instead of null-hypothesis significance tests. Organization Science, 22, 1105–1120.

Simmons, J., Nelson, L., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allow presenting anything as significant. Psychological Science, 22, 1359–1366.

Vul, E., Harris, C., Winkielman, P., & Pashler, H. (2009). Puzzlingly high correlations in fMRI studies of emotion, personality, and social cognition. Perspectives on Psychological Science, 4 (3), 274–290.

Zyphur, M. J., & Pierides, D. C. (2017). Is quantitative research ethical? Tools for ethically practicing, evaluating, and using quantitative research. Journal of Business Ethics, 143, 1–16.

Zyphur, M. J., Pierides, D. C., & Roffe, J. (2016). Measurement and statistics in ‘organization science’: Philosophical, sociological, and historical perspectives. In R. Mir, H. Willmott, & M. Greenwood (Eds.), The Routledge companion to philosophy in organization studies (pp. 474–482). Abingdon: Routledge.


The author received no funding to support this research.

Author information

Authors and Affiliations

Said Business School, University of Oxford, Park End Street, Oxford, OX1 1HP, UK

Thomas C. Powell


Corresponding author

Correspondence to Thomas C. Powell.

Ethics declarations

Conflict of interest

The author declares having no conflict of interest in this research.

Research Involving Human and Animal Rights

The author declares that this article does not contain any studies with human participants or animals performed by the author.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article

Powell, T.C. Can Quantitative Research Solve Social Problems? Pragmatism and the Ethics of Social Research. J Bus Ethics 167, 41–48 (2020). https://doi.org/10.1007/s10551-019-04196-7


Received: 21 December 2017

Accepted: 23 May 2019

Published: 13 June 2019

Issue Date: November 2020



Keywords: Quantitative research



Exploring sustainable development & the human impact of natural disasters


Authored by: Carson Easterly

Photography by: Krista Patton


A Q&A with research assistant professor Chenyi Ma

What factors allow people to prepare for and recover from natural disasters? Dr. Chenyi Ma, a research assistant professor at Penn’s School of Social Policy & Practice (SP2), conducts interdisciplinary research that investigates the role of inequality in disasters’ impact and points to policy solutions. Having first come to SP2 as a PhD in Social Welfare student, he now teaches SP2 students while conducting research on disaster risk reduction and sustainable development.

What drew you to SP2 and Penn? 

Disaster research requires interdisciplinary collaboration, and Penn is the ideal place. I have mentors at SP2, Wharton, and Engineering and access to rich, multidisciplinary academic resources. SP2’s social justice mission and commitment to sustainable development also align with my values.

Twelve years ago, I came to SP2 as a student in the PhD in Social Welfare Program. I continued my research on the human impacts of natural disasters as a post-doctoral student.  Now, as a research assistant professor at SP2, I focus on the social determinants of health and behavioral outcomes in disaster contexts, including public health emergencies like the COVID-19 pandemic. 

Before joining Penn, you worked as a program officer for Education for Sustainable Development at the World Wide Fund for Nature (WWF). How does that background connect to your research and teaching? 

My work at WWF focused on promoting a holistic approach known as Education for Sustainable Development (ESD). With a student-centered learning approach similar to social work education, ESD empowers individuals with the knowledge, skills, values, and attitudes needed to make informed decisions and take responsible actions for environmental integrity, economic viability, and a just society.  ESD also encourages researchers to employ Community-Based Participatory Action Research (CBPAR) — a collaborative research approach that involves community members — to foster both researchers’ and community members’ knowledge and ability to sustainably manage their local natural resources while respecting, and even sometimes using, indigenous culture, knowledge, and social infrastructure. Student-centered teaching and collaborative research continue to be important themes of my work.

How would you define sustainable development?

Sustainable development is about meeting present needs without compromising the ability of future generations to meet theirs. This approach encompasses social, economic, and political dimensions. My current research delves into the social dimension, recognizing that addressing environmental challenges requires collaboration and co-learning among natural and social scientists, professionals, and stakeholders to find solutions. 

You currently research social vulnerability and disaster preparedness, housing and urban resilience, environmental justice, energy policy, and social epidemiology. What drew you to these research interests?

One of the most important components of sustainable development is disaster risk reduction. As a student at Washington University’s Master of Social Work program and SP2’s PhD program, I began to think of questions about the people affected by disaster risk — for instance, who is more likely to suffer from damage as a result of natural disasters? Which survivors of disasters are more susceptible to mental illness? Do existing social policy programs adequately address the needs of disaster victims?

To answer these questions and others, I began to conduct empirical research. For example, using large datasets and GIS mapping, I led a project that examined the severity of home damage caused by Hurricane Maria in Puerto Rico. Homes occupied by renters were four times more likely to have been destroyed than those occupied by homeowners. This is direct evidence that low-income renters are extremely socially vulnerable to housing damage caused by climate-related disasters.  

Through another study, I found there were racial and ethnic disparities in the prevalence of mental illness among Hurricane Sandy survivors in New Jersey and New York City. Such disparities, largely accounted for by different levels of exposure to a disaster, underscore the need for increased provision of social support to more susceptible groups to effectively mitigate these risks.


What kind of an impact do you hope your work can have on policy in the face of climate inequality?

I hope policymakers might consider public-private partnerships like the National Flood Insurance Program to address private insurance affordability for low-income households who are most vulnerable to housing damage. One of my recent research studies examined how income inequality could influence household consumption behaviors related to disaster preparedness, with a specific focus on private homeowners’ insurance. Observing Hurricane Maria survivors in Puerto Rico, the study found that private homeowners’ insurance — the most important financial tool to mitigate property losses — was unaffordable for low-income households, and income inequality further exacerbated this unaffordability.  

Another of my current studies provides new insight into how public assistance, such as cash transfer welfare programs, can effectively address vulnerable groups who have a high level of risk perception and the intention to prepare for disasters, yet lack the financial resources to do so. The study examines the progress of human behavioral changes for disaster preparedness along three developmental stages, from “not prepared,” to “have the intention to prepare,” and ultimately to “already prepared.” The preliminary findings of this study suggest disparities between Hispanics and non-Hispanic whites. While Hispanics are more likely to have the intention to prepare and exhibit higher levels of risk perception than non-Hispanic whites, they are less likely to take concrete actions of preparedness. This is largely due to the unequal access to preparedness resources between the two groups.

You’ve taught the course Quantitative Reasoning and Program Evaluations at SP2. What are some highlights of your work in the classroom?   

The students and their research projects are always the highlight of my time in the classroom. I view my role as a facilitator who works with them to build their research capacity for completing their own projects. One significant component of education for sustainable development (ESD) is learning by doing. My Penn students adopt a "learning by researching" approach, focusing on ways in which their research projects can practically address critical issues in their communities, including environmental, health, and political issues.

What are you looking forward to discovering next?

I am continuing to explore maladaptive responses to climate-related disasters and public health emergencies. My previous research found that natural disaster survivors often exhibit adaptation behaviors, including maladaptive behaviors like increased alcohol use, after a disaster when they lack financial assistance for recovery. For a current project, I am examining household decision-making processes and underlying maladaptive responses to energy insecurity during the pandemic. My hope is to provide new insights into how energy policies can be more responsive to future disasters. 

Chenyi Ma, MSW, PhD

Research Assistant Professor

April 29, 2024

Quantitative study assesses how gender and race impact young athletes' perceptions of their coaches

by University of North Carolina at Greensboro

Across the U.S., there are over 8 million student-athletes in high school and college. Engaging in sports can contribute to physical, mental, and social benefits, and coaches can play a key role in student-athletes' continued participation in sports.

A study led by UNC Greensboro's Dr. Tsz Lun (Alan) Chu, published in Sport, Exercise, and Performance Psychology, examines how multiple aspects of a young athlete's identity, including gender and race, may relate to their perceptions of their coaches and to their mental health.

"There have been quite a few studies on this topic looking at gender and cultural differences from a qualitative standpoint, but they have not looked at the combination of these factors using a quantitative approach, which is crucial," says Chu, who conducts research in applied sport psychology.

His recent study takes steps to fill that gap by surveying 846 athletes, about half in high school and half in college, with the first analysis assessing two gender categories—male and female—and three race/ethnicity categories—White, Hispanic/Latino, and Black.

"About one-third of young athletes drop out within a year of participation, so it's really important we understand how coaches can support them," Chu says.

Athletes were asked to rate the degree to which their coach created a supportive or unsupportive environment, including their coach's controlling, empowering, and inclusive behavior. The researchers also asked athletes about the degree to which they felt their psychological needs, including a sense of autonomy, competence, and relatedness, were met in the sporting environment.

"Psychological needs are the factors that make you feel satisfied mentally in your life and are the things that every person would need in order to feel motivated and do their best," says Chu, who is a certified mental performance consultant and an associate professor in the UNCG Kinesiology Department.

In their preliminary study, the authors were surprised to find that, as a group, Black female athletes reported the most positive perceptions of the coaching climate and the greatest satisfaction with their psychological needs compared with other race-by-gender subgroups.

"When individuals have more than one marginalized identity, they tend to feel isolated and less supported," Chu says "So, it was surprising that Black females had the most positive perceptions of their sports environments, which were mostly male-dominated spaces in this study. We're interested to see if these findings hold in a larger sample involving more diverse schools."

Consistent with past literature, the authors found that Black male athletes perceived more disempowering coaching climates compared with other race-by-gender subgroups. In light of these findings, Chu suggests coaches take a nuanced approach to ensure athletes from all backgrounds feel supported in sports.

"Even though your coaching approach may work for 80 percent of your athletes, it doesn't mean you should just stick with that approach," he said. "There may be some athletes that need a different method, and you have to adapt."

In future studies, Chu plans to explore how athletes from more backgrounds, including Asian, Indigenous, and Native American athletes, perceive their coaches. He also hopes to examine how a coach's identity may relate to players' perceptions of them and impact the athlete-coach relationship.

  • Open access
  • Published: 23 April 2024

A mixed methods evaluation of the impact of ECHO® telementoring model for capacity building of community health workers in India

  • Rajmohan Panda 1 ,
  • Supriya Lahoti   ORCID: orcid.org/0000-0001-6826-5273 2 ,
  • Nivedita Mishra 2 ,
  • Rajath R. Prabhu 3 ,
  • Kalpana Singh 4 ,
  • Apoorva Karan Rai 2 &
  • Kumud Rai 2  

Human Resources for Health, volume 22, Article number: 26 (2024)

Introduction

India has the largest cohort of community health workers, with one million Accredited Social Health Activists (ASHAs). ASHAs play a vital role in providing health education and promoting accessible health care services in the community. Despite their potential to improve the health status of people, they remain largely underutilized because of their limited knowledge and skills. Considering this gap, Extension for Community Healthcare Outcomes (ECHO) India, in collaboration with the National Health System Resource Centre (NHSRC), implemented a 15-hour (over 6 months) refresher training for ASHAs using a telementoring interface. The present study assesses the impact of the training program in improving the knowledge and skills of ASHA workers.

Methods

We conducted a pre–post quasi-experimental study using a convergent parallel mixed-method approach. The quantitative survey (n = 490) assessed learning competence, performance, and satisfaction of the ASHAs. In addition, in-depth interviews with ASHAs (n = 12) and key informant interviews with other stakeholders (n = 9) examined the experience and practical applications of the training. Inferences from the quantitative and qualitative approaches were integrated during the reporting stage and presented using an adapted Moore's Expanded Outcomes Framework.

Results

There was a statistically significant improvement in learning (p = 0.038) and competence (p = 0.01) after attending the training. Participants were satisfied with the opportunity provided by the teleECHO™ sessions to upgrade their knowledge; however, internet connectivity and the duration and number of participants in the sessions were identified as areas needing improvement for future training programs. Participants reported improved confidence in communicating effectively with the community, and positive changes in the attitudes of ASHAs towards patients and community members were also reported after the training. Peer-to-peer learning through a case-based discussion approach helped ensure that the training was relevant to the needs and work of the ASHAs.

Conclusions

The ECHO Model ™ was found effective in improving and updating the knowledge and skills of ASHAs across different geographies in India. Efforts directed towards knowledge upgradation of ASHAs are crucial for strengthening the health system at the community level. The findings of this study can be used to guide future training programs.

Trial registration The study has been registered at the Clinical Trials Registry, India (CTRI/2021/10/037189) dated 08/10/2021.

Background

The Alma Ata Declaration of 1978 recognized primary health care as an essential element for improving community health. Community health workers (CHWs) have the potential to complement an overstrained health workforce and enhance primary healthcare access and quality [1]. Low- and middle-income countries (LMICs) face a triple burden of a low density of doctors and nurse-midwives, low government expenditure on health, and disproportionately poor health outcomes [2]. The roles and responsibilities of CHWs vary across LMICs [3]. A systematic review has documented that the socio-cultural, economic, health system, and political context in which CHW interventions operate in LMICs influences the implementation and success of those interventions [4].

The National Rural Health Mission (NRHM), India introduced Accredited Social Health Activists (ASHAs) as female CHWs in 2005. The ASHAs are women volunteers selected from the local village and were initially conceptualized with a vision to improve maternal and child health in the country; however, over time, they are now involved in different national health programmes [ 5 , 6 ]. Despite their potential to contribute to preventive and promotive healthcare, they remain largely underutilized because of their limited knowledge and skills [ 1 ]. The World Health Organisation (WHO) has suggested ‘regular training and supervision’ for CHWs to fulfil their role successfully [ 7 ]. In India, the health system lacks methods for continuous education and routine upgradation of the ASHA’s skills [ 8 , 9 , 10 ].

In LMICs, digital training programs can help expand the reach of training to large numbers of healthcare workers at low cost without interfering with the delivery of routine healthcare services [11, 12]. An evidence-mapping study of 88 studies that used technology for training CHWs in LMICs found that training focussed on maternal and child health, while other high-burden diseases were neglected [13]. In India, studies evaluating digital training for CHWs over the last decade have focussed on specific diseases or been limited to specific states [10, 14]; the present study was conducted across multiple states. More studies with larger sample sizes are needed to evaluate such training initiatives in India [13, 15, 16].

Project Extension for Community Healthcare Outcomes (ECHO) presents an educational opportunity for capacity-building through a telementoring platform that uses video conferencing to create a continuous loop of learning and peer support. The sessions combine didactic presentations with case-based learning that allows problem-solving through shared best practices [17]. ECHO India, in collaboration with the National Health System Resource Centre (NHSRC), provided refresher training for ASHAs [18]. There is increasing evidence of the positive effect of ECHO training on medical providers' learning and self-efficacy; however, evidence of its value as a training platform for CHWs in LMICs is limited. Previous studies that evaluated the use of the ECHO Model™ for CHWs focussed on specific diseases and were conducted in high-income countries (HICs) [19, 20, 21]. In adopting digital technology, CHWs in LMICs encounter challenges such as poor proficiency in accessing and using digital platforms, limited access to troubleshooting, poor internet connectivity, and a lack of in-house support for resolving issues [22]. The present study was designed to assess the impact of the ECHO telementoring model in improving the knowledge and skills of ASHA workers in delivering comprehensive health services, and to provide new insights for measuring outcomes of digital training programs for CHWs.

Methods

Study design

A pre–post quasi-experimental design using a convergent parallel mixed-method approach [ 23 ] was employed. The quantitative and qualitative data were collected concurrently. Inferences from both approaches were integrated during the reporting stage. This allowed for a comprehensive understanding of the effect of training on the knowledge and skills of ASHAs.

The ECHO training intervention and curriculum

Project ECHO® designed a 15-hour virtual refresher training program, delivered over 6 months (October 2021 to March 2022), to enhance the capacity of ASHAs to deliver counselling services for comprehensive healthcare in four states (n = 2293). Each session lasted 90 minutes. The ECHO NHSRC training used a "hub and spoke" structure in which a multidisciplinary team of experts (trainers) based at a regional academic medical centre (the "hub") engaged with the ASHAs (the "spokes") [24], who attended the sessions from dedicated learning sites (primary health centres, PHCs). Each site also had a coordinator who helped facilitate the discussions and questions. The training curriculum was developed from the NHSRC 'ASHA training modules' [18] in the regional languages, in consultation with partners (hub leaders and trainers). It comprised 10 sessions covering a range of topics, such as maternal health, newborn care, child health, nutrition, reproductive health, violence against women, tuberculosis, vector-borne diseases, non-communicable diseases, COVID-19, palliative care, and mental health. The training presentations combined text with visual learning methods, such as images, videos, and links to training resources.

Study settings

The evaluation study was conducted in the four states of India where training sessions were held, representing the four geographical regions: northern (Himachal Pradesh, n = 499), southern (Tamil Nadu, n = 500), eastern (West Bengal, n = 618), and north-eastern (Sikkim, n = 676). The intervention (training sessions) was completed in March 2022. The end-point data were collected from March 2022 to May 2022.

Study participants and recruitment

Simple random sampling was used to select the ASHAs from each state for the quantitative survey. The participants were recruited from a list of ASHAs who would be receiving the ECHO NHSRC training. To be included, ASHAs had to be enrolled in the refresher training, plan to continue working for the next 10 months, have available contact details, and consent voluntarily. The ASHAs were contacted through mobile phones in each state. Key informant interviews (KIIs) were conducted with hub leaders who were involved in implementing the training and with the trainers (faculty) who delivered the lectures, and in-depth interviews (IDIs) were conducted with ASHAs.

Sample size

The sample size for the quantitative study was estimated by assuming a 25% improvement in knowledge and skills, 80% power, and a design effect of 1.7. An adjustment for 30% loss to follow-up and 20% non-response (based on previous experience) led to a sample of 591 participants across the four states, i.e., 148 participants from each state. For the qualitative study, purposive sampling with maximum variation across age, education, practice sites, and years of work experience was used to select participants. A total of 12 IDIs were conducted with ASHAs and nine KIIs with stakeholders (Additional file 2: Appendix S2).
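Because the baseline proportion behind the "25% improvement" is not reported, the calculation can only be reconstructed approximately. The sketch below assumes a two-proportion comparison (50% to 75%) and shows the standard inflation steps (design effect, then attrition and non-response) rather than reproducing the exact figure of 591:

```python
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

effect = proportion_effectsize(0.50, 0.75)  # Cohen's h for an assumed 50% -> 75% change
n_base = NormalIndPower().solve_power(effect_size=effect, power=0.80, alpha=0.05)

n_clustered = n_base * 1.7                         # inflate by the design effect
n_final = n_clustered / ((1 - 0.30) * (1 - 0.20))  # 30% attrition, 20% non-response
print(round(n_base), round(n_clustered), round(n_final))
```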

Study tools and data collection

For quantitative data collection, a structured questionnaire was designed through a collaborative approach by the research and program implementation teams. The knowledge of ASHAs was assessed by a combination of 18 technical questions and case vignettes. Learning and competence, performance, and satisfaction were assessed on a 5-point Likert scale (1 = Strongly Disagree; 2 = Disagree; 3 = Neither Agree nor Disagree; 4 = Agree; 5 = Strongly Agree). The face validity of the questionnaire was tested with ten ASHAs (separate from those recruited in the study) and five primary care experts; changes related to language, clarity, and relevance were then made based on their feedback. Separate discussion guides were developed for KIIs with trainers (Additional file 3: Appendix S3) and hub leaders (Additional file 4: Appendix S4) and for IDIs with ASHAs (Additional file 5: Appendix S5). The guides focussed on examining the experience and practical applications of the training and were field tested before being administered in the main study. All study tools were translated into the local languages of the states and back-translated to check for discrepancies.

The data were collected by telephone by experienced and trained researchers from social science backgrounds. Because of telephonic data collection, we were unable to capture non-verbal interview data such as emotions or gestures, which are particularly important in qualitative research; this may affect the richness of the data and the interpretation of responses. The quantitative tool was designed in the CS Pro software (version 7.5) and data were collected using its smartphone application. The qualitative interviews lasted around 40–50 minutes and were audio recorded. All interviews were translated and transcribed verbatim.

Data analysis

We summarized the quantitative data using descriptive statistics. Continuous variables were summarized using mean ± SD, and categorical variables were summarized using frequencies and percentages. The responses recorded on the 5-point Likert scale were recategorized during the analysis into three categories: 'agree' (combining strongly agree and agree), 'disagree' (combining strongly disagree and disagree), and 'neutral' [25]. A paired t test was used to assess the difference between the pre- and post-scores for learning and competence and for participants' attitudes toward the ECHO training. McNemar's test was used to assess changes in pre- and post-test scores for the technical domain. A p value of less than 0.05 was considered significant. STATA 16.0 statistical software was used for the analysis.
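A minimal sketch of these analyses on made-up data (the study itself used STATA; scipy and statsmodels stand in here for illustration):

```python
import numpy as np
import pandas as pd
from scipy.stats import ttest_rel
from statsmodels.stats.contingency_tables import mcnemar

# Recategorize 5-point Likert responses into agree / neutral / disagree
likert = pd.Series([1, 2, 3, 4, 5, 4, 2])
cats = likert.map({1: "disagree", 2: "disagree", 3: "neutral",
                   4: "agree", 5: "agree"})

# Paired t test on hypothetical pre/post competence scores
pre = np.array([3.2, 3.5, 2.8, 3.0, 3.6, 3.1, 2.9, 3.3, 3.4, 3.0])
post = np.array([3.6, 3.7, 3.1, 3.4, 3.5, 3.6, 3.2, 3.5, 3.8, 3.3])
t, p = ttest_rel(post, pre)
print(f"paired t = {t:.2f}, p = {p:.4f}")

# McNemar's test on one correct/incorrect knowledge item:
# rows = pre (correct, incorrect); columns = post (correct, incorrect)
table = np.array([[180, 25],
                  [95, 190]])
res = mcnemar(table, exact=False, correction=True)
print(f"McNemar chi2 = {res.statistic:.2f}, p = {res.pvalue:.4f}")
```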

Qualitative data were analyzed according to the principles of the Framework approach [26], which combines inductive and deductive approaches. As a first step, two authors (SL and NM) familiarized themselves with four randomly selected transcripts and independently coded them using initial codes developed from Moore's framework levels of participation, satisfaction, learning, competence, and performance [27]. New codes that emerged during the analysis were included. Discussion and comparison of the double-coded transcripts enabled the development of an agreed set of codes; any disagreements were discussed and resolved with the help of the third author (RP) to achieve inter-coder agreement. A final codebook was developed and applied to all the transcripts. The codes were combined and categorized into key emerging themes, and quotes (respondents' exact words) were included to represent the main findings. Atlas.ti (version 8) software was used for data analysis.

Results

Moore’s level 1—participation

Table 1 presents the baseline demographics of the recruited participants. Of the 610 participants who completed the pre-training survey, 490 completed the post-training survey, a follow-up rate of 80% (95% CI 76.6, 83.1). A total of 120 participants (20%, 95% CI 16.8, 23.3) were lost to follow-up, because (a) contact numbers were not operational (n = 96) or (b) participants refused due to time considerations (n = 24). The field investigators attempted three additional phone calls, coordinated with hubs for participants' alternate contact information, and offered flexible phone appointments to maximize participation in the post-training survey. The majority (68%) of ASHAs were posted at sub-centres, the most peripheral unit of contact between the health system and the community [28]. Most participants (75%) had completed their high school (10th) education.
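The reported follow-up rate can be checked directly from the counts (490 of 610); a sketch, with the caveat that the paper's interval may have been computed by a slightly different method:

```python
from statsmodels.stats.proportion import proportion_confint

rate = 490 / 610
lo, hi = proportion_confint(490, 610, alpha=0.05, method="normal")
print(f"follow-up rate = {rate:.1%} (95% CI {lo:.1%}, {hi:.1%})")
```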

A hub leader described the efforts made by the ECHO to facilitate the participation of the ASHAs in the training.

“ECHO provided a facility where everyone can gather at the nearest block for the training. Physical and online modes [are] both available” (Hub-leader, Himachal Pradesh).

Moore’s level 2—satisfaction

The end-point survey assessed participants' satisfaction with the ECHO training. The survey included eight items measuring overall training satisfaction and five items measuring satisfaction with factors specific to the telementoring model, using close-ended questions; satisfaction with the training content and environment was measured with four items. Over 90% of participants were satisfied with every component of the ECHO telementoring intervention except one (sharing of additional resources and training material) (Additional file 1: Appendix S1, Tables S1.1, S1.2, S1.3). While participants found the overall intervention favourable, 54.5% were dissatisfied with internet connectivity in the training sessions, and roughly three in ten faced challenges with the duration (31.2%), frequency (31.2%), and number of participants (28.4%) in the sessions (Additional file 1: Appendix S1: Table S1.3).

The qualitative findings also show that most of the trainees were satisfied with the learning opportunity provided by the ECHO training.

“After attending these ECHO sessions, I felt we are constantly learning new techniques and it’s a deep sense of satisfaction” (ASHA, Tamil Nadu).

The ASHAs also shared areas of the ECHO model that did not meet their requirements and needed improvement. They felt that the time allotted to a session was not sufficient and that some topics were covered too quickly.

“They rush a lot while teaching over phone. It will be more helpful if they take more time and explain the things in a more detailed manner” (ASHA, WB)

Another ASHA suggested increasing the duration of training to improve their understanding of some topics.

"Increase the time of the training. Topics can be made deeper, and richer for better explanations" (ASHA, Tamil Nadu)

ASHAs described challenges related to connectivity while attending the training.

“The network connection was a problem and video used to lag” (ASHA, Sikkim)

Trainers shared their opinion about aspects of online trainings that did not meet their expectations.

“The problem is that they only join the meeting [online training] and do their own work, they actually do not listen properly.” (Trainer, WB)

A trainer mentioned that the large number of participants in some sessions affected the interaction among participant ASHAs.

“Sometimes a session has too many participants causing coordination efforts to be a challenge in these sessions” (Trainer, TN)

Difficulties in reaching the PHCs were recorded from the state of Sikkim. The geographical location and lack of transport facilities were mentioned by a trainer.

“We have transportation problem, our ASHA comes from rural area and it’s difficult to get taxi, which makes [it] harder to attend classes” (Trainer, Sikkim)

Many participants regarded organizational support as a facilitator for attending the training program. An ASHA from Tamil Nadu described how the issue of distance was resolved through management interventions from the organization.

“Our Block is 30 km away. There is another Block nearby that is 1 km only from here, they sent us there… so there was no problem” (ASHA, TN)

Moore’s level 3—learning

McNemar's Chi-square statistic showed significant differences between pre- and post-ECHO proportions in various aspects of health-related technical knowledge. Before the training, 1% of participants were aware of the correct schedule to be followed in the first week after the delivery of a child; this increased to 40% post-training (p < 0.001). Overall, a statistically significant increase of 6% (95% CI 0.0003, 0.12; p = 0.038) in participants' technical knowledge after ECHO training was found. After the training, a 7% increase in knowledge of malaria and its symptoms (p = 0.002) and a 9% increase in knowledge of the right action to be undertaken (p < 0.001) were reported. Knowledge in some areas, such as the recommended duration of physical activity or exercise (p < 0.001), immunisation after childbirth (p = 0.001), and family planning for women after childbirth (p = 0.002), showed a decrease after the training (Additional file 1: Appendix S1, Table S2). Post-ECHO training, ASHAs reported an improvement in their knowledge of using a smartphone (switching it on and off, and navigating) (p = 0.0005) and of navigating a mobile application (p = 0.59). The ASHAs reported a 2% decrease in their knowledge of downloading content on the mobile (p = 0.07) (Fig. 1).

[Figure 1: Self-rated ICT knowledge of ASHAs]

The qualitative data show that ASHAs who did not have a smartphone found it difficult to download and save content. One of the participants reported receiving additional training content in the form of a pdf file. She also mentioned that those who do not use a smartphone find it challenging to access this additional resource.

“We get the study material in a pdf so that simplifies our work further. But those who do not have a smartphone, find it difficult to get this opportunity” (ASHA, WB)

3A—Declarative learning

Declarative learning assesses how participants articulate the knowledge that the educational activity intended them to know (knowing what). The qualitative findings show that the training had increased the ASHA’s knowledge in specific domains such as breastfeeding during COVID-19.

“The doubt was whether a mother can breastfeed the baby when suffering from COVID-19. I got clarity about that… many such topics were cleared” (ASHA, Himachal Pradesh)

3B—Procedural learning

Procedural learning assesses the participants' articulation of how to do what the educational activity intended them to know (knowing how).

Participants reported that they had gained new skills related to the approach and identification of healthcare issues after attending the ECHO training.

“Earlier we wouldn’t know if ear related issues had a resolution – But following the ear related training we are aware that such issues can be cured or have treatments” (ASHA, Tamil Nadu).

The qualitative interviews revealed additional themes that described the value of the ECHO training program in improving the learning experience of ASHAs.

ASHA workers felt that the case presentations from their peers enhanced their learning experience.

“One ASHA shared a case of an anaemic mother. Based on this case we learned that this could have been prevented if iron tablets are provided from the adolescent stage” (ASHA, Tamil Nadu).

The interactive nature of the sessions and the discussions benefitted the learning experience of the ASHAs.

“Open discussion helped us so much. We can discuss any topics if we haven’t understood and sir used to explain again” (ASHA, Sikkim)

Moore’s level 4—competence

The participants reported significant improvements in their confidence to identify and manage several health conditions, such as birth asphyxia in home deliveries and its management with a mucus extractor (p = 0.01), to screen and refer pregnant women (p = 0.01), and to disseminate information on domestic violence and sexual harassment (p = 0.001). Overall, a statistically significant increase of 6% (95% CI 0.01, 0.10; p = 0.01) in participants' competence after attending the ECHO training was found. Participants reported a decrease in their confidence to track child immunisation (p < 0.001), monitor symptoms of COVID-19 (p < 0.001), and clarify concerns of the community (p < 0.001) after attending the training (Additional file 1: Appendix S1, Table S3).

Participants mentioned an improvement in their confidence while communicating with patients and their families.

“Initially we could not talk to people so comfortably, we hesitated at times but after being trained we can talk to people and their families properly and easily now” (ASHA, West Bengal)

An ASHA described a gap in their ability to talk to mothers in the field and suggested including more training content on efficient communication skills.

“We go on field and talk to mothers. There was no training for these, but I feel it will be good if we can have training on how to talk to mothers comfortably” (ASHA, WB)

Moore’s level 5—performance

The study identified improvements in ASHAs' attitudes toward maternal and child health issues. Overall, a 5% improvement (95% CI −0.009, 0.10; p = 0.09) in participants' attitudes post-ECHO training was found. Almost all the participants (99%) reported applying the skills learnt during the training at their workplaces, and more than 90% felt that the ECHO training expanded access to healthcare in their community (Fig. 2). The ASHAs reported an improvement in their attitudes towards the inclusion of HIV patients in the community (p = 0.01) and home visits for newborn babies (p < 0.001) (Additional file 1: Appendix S1, Table S4).

[Figure 2: Self-reported performance of ASHAs]

The ASHAs shared specific examples where they made changes in their practice or treatment strategies after attending the training.

“[Earlier] the implementation was not proper [correct]. As an example, if a child’s life has to be saved on the spot, we would take the medicines and syringes separately. Now we take the necessary items section wise including the AFI kit. So that’s the change” (ASHA, Tamil Nadu).

Discussion

The results of this evaluation suggest that Project ECHO provides a suitable and efficacious platform for training ASHAs. The participants reported improvements in their knowledge, skills, and practices, and described improved confidence to communicate more effectively. Areas in which the ASHAs reported a persistent lack of knowledge and confidence include newborn immunisation and family planning after pregnancy.

The NRHM guidelines for the recruitment of ASHAs require candidates to have at least eight or ten completed years of formal education. Low literacy and inadequate training of ASHAs have been observed in different states in India [30, 31]. However, with proper training and support, ASHAs can provide comprehensive preventive and promotive healthcare services [29]. In this study, the majority (75%) of ASHAs across all states had ten or more years of schooling, and the ECHO training can further bolster their knowledge, skills, and confidence in providing effective services.

The ASHAs receive 23 days of training in the first year, followed by 12 days of training in every subsequent year to keep them updated with the knowledge and skills needed to effectively perform their roles and responsibilities. Previous studies have identified many challenges in the training of ASHAs, such as lack of regular refresher training [ 32 ], shortage of competent trainers, insufficient funds, and use of obsolete health information [ 33 ]. The training programs have mostly been didactic-based and had limitations in the engagement of participants [ 34 ]. The ECHO NHSRC refresher training addresses these limitations by promoting peer-to-peer learning and through a case-based discussion approach [ 35 ].

Our findings show a significant increase in the knowledge of ASHA workers in specific domains such as maternal and child health. A randomized controlled trial in Karnataka, India, found a significantly greater improvement in mental health knowledge, attitude, and practice (KAP) scores amongst ASHAs trained through hybrid training (a traditional 1-day in-person classroom training plus seven online sessions using the ECHO Model) than through conventional classroom training [14]. Study findings have also highlighted improvements in ASHAs' knowledge related to oral health and palliative care post-ECHO training, and improvements in knowledge have been observed in other studies that evaluated ECHO telementoring interventions in cancer screening [36], palliative care [37, 38], HIV [39], and chronic pain [40]. In this study, ASHAs reported poor knowledge of the immunisation schedule for a newborn, as well as low confidence to record and track immunisation in the community, even after the ECHO training. A critical function of ASHAs is to assist auxiliary nurse midwives (ANMs) or nurses with all immunisation activities [41]. A previous study in Karnataka in 2020 found inadequate knowledge among ASHAs about child immunisation; it also documented that increasing the number of training days and focusing on child care gave ASHAs a better understanding of interventions related to child healthcare [42]. As part of the course structure, ECHO provides one session on newborn and post-partum care. An assessment of the number of sessions needed to cover these topics was beyond the scope of our research but would be beneficial.

Previous studies have identified several shortcomings in ASHAs' communication and counselling abilities [43, 44, 45]. The findings of this study revealed that the ASHAs faced communication issues while discussing health matters related to family planning and COVID-19 with the community. Previous research has found that the interpersonal communication of ASHAs is influenced by factors such as health system support and community context [46]. A study exploring the perspectives of ASHAs on a mobile training course in India also found that they encountered barriers in their interactions with beneficiaries, such as resistance from family members, fear of poor quality of care, and the financial costs of care [44]. Training programs must, therefore, also incorporate how ASHAs can navigate social behaviours and norms to improve the impact of counselling [47, 48]. The extent to which the ECHO training can identify and incorporate community hierarchies to improve ASHAs' communication needs further exploration. In this study, large batch sizes (n = 40) and limited use of video by participants during the training hampered engagement among ASHAs as well as with the trainers. A previous study in the USA suggested that limiting batch size and ensuring face-to-face interactions on the virtual platform ensured a higher level of accountability and made it easier to engage with others in the ECHO training sessions [49].

CHWs face significant barriers when using digital technology in LMICs, making it challenging for them to access training on digital platforms [50]. The ASHAs in this study reported an improvement in their ability to use smartphones and navigate mobile applications. Our findings also suggest that ASHAs should be better oriented to accessing content on handheld devices.

The mentorship by trainers added value to participants' knowledge and helped improve their skills. In this study, participants' attitudes towards their work changed after attending the ECHO training, suggesting that the learning and confidence developed during the training would transfer to their work in healthcare settings and communities. ECHO participants in previous studies have demonstrated similar changes in their practices [35, 40]. Our findings indicate that the ECHO Model is an effective platform that can help foster a virtual community of practice through case-based learning, shared best practices, and online mentorship by experts.

Future directions

There should be more sessions on topics related to post-natal and newborn care as the ASHAs showed poor knowledge and competence in these areas.

There should be more training on counselling and the development of communication skills for ASHAs, especially for maternal and child health and COVID-19.

An orientation for ASHAs should be conducted to facilitate the use of technology and the platform for learning. This may also help overcome some of the challenges described by the ASHAs in this study.

Strengths and limitations

The study used a rigorous quasi-experimental design across four different states of India. Our follow-up rate was 80%, indicating a high proportion of participants completing the pre–post assessment. The study also has certain limitations. It was not possible to use randomisation and a pure experimental design, which affects internal validity; the inclusion of a control group would have strengthened the study. The self-reported outcomes may be subject to social desirability bias. We did not document information on attendance and drop-outs from the training program. Finally, the qualitative results have to be interpreted carefully because of the small sample size of the qualitative component relative to the overall study sample.

Conclusions

There is increasing recognition of the importance of CHWs globally for promoting a continuum of care and expanding access to health services. ASHA workers constitute critical human resources in the Indian health system, and efforts to empower them are crucial for strengthening the health system at the community level. The encouraging results of this study indicate the effectiveness of Project ECHO in building the capacity of ASHA workers across different geographies in the country.

Availability of data and materials

All data generated or analyzed during this study are included in this published article (as Additional files).

Abbreviations

CHWs: Community health workers
SDGs: Sustainable development goals
NRHM: National Rural Health Mission
ASHAs: Accredited social health activists
DIKSHA: Digital infrastructure knowledge sharing
MHRD: Ministry of Human Resource Development
COVID-19: Coronavirus Disease 2019
NHSRC: National Health System Resource Centre
WHO: World Health Organization
HICs: High-income countries
LMICs: Low- and middle-income countries
ECHO: Extension for Community Healthcare Outcomes
IDIs: In-depth interviews
KIIs: Key informant interviews
CME: Continuing medical education
IEC: Institutional Ethics Committee
PIS: Participant Information Sheet
JSPH: Jodhpur School of Public Health
PHFI: Public Health Foundation of India

References

1. Hartzler AL, Tuzzio L, Hsu C, Wagner EH. Roles and functions of community health workers in primary care. Ann Fam Med. 2018;16(3):240–5.
2. Olaniran A, Banke-Thomas A, Bar-Zeev S, Madaj B. Not knowing enough, not having enough, not feeling wanted: challenges of community health workers providing maternal and newborn services in Africa and Asia. PLoS ONE. 2022;17(9):e0274110.
3. O'Donovan J, O'Donovan C, Kuhn I, Sachs SE, Winters N. Ongoing training of community health workers in low-income and middle-income countries: a systematic scoping review of the literature. BMJ Open. 2018;8(4):e021467.
4. Kok MC, Kane SS, Tulloch O, Ormel H, Theobald S, Dieleman M, et al. How does context influence performance of community health workers in low- and middle-income countries? Evidence from the literature. Health Res Policy Syst. 2015;13(1):13.
5. Saprii L, Richards E, Kokho P, Theobald S. Community health workers in rural India: analysing the opportunities and challenges accredited social health activists (ASHAs) face in realising their multiple roles. Hum Resour Health. 2015;13(1):95.
6. Ministry of Health and Family Welfare, Government of India. Non Communicable Disease Control Programmes: National Health Mission. 2023. https://nhm.gov.in/index1.php?lang=1&level=1&sublinkid=1041&lid=614. Accessed 13 Feb 2023.
7. World Health Organization. What do we know about community health workers? A systematic review of existing reviews. 2020. https://www.who.int/publications-detail-redirect/what-do-we-know-about-community-health-workers-a-systematic-review-of-existing-reviews. Accessed 13 Feb 2023.
8. Bajpai N, Dholakia RH. Improving the performance of accredited social health activists in India. Mumbai: Columbia Global Centres South Asia; 2011.
9. Panwar DS, Naidu V, Das E, Verma S, Khan AA. Strengthening support mechanisms for accredited social health activists in order to improve home-based newborn care in Uttar Pradesh, India. BMC Proc. 2012;6(5):O33.
10. Yadav D, Singh P, Montague K, Kumar V, Sood D, Balaam M, et al. Sangoshthi: empowering community health workers through peer learning in rural India. In: Proceedings of the 26th International Conference on World Wide Web; 2017. p. 499–508.
11. Labrique AB, Wadhwani C, Williams KA, Lamptey P, Hesp C, Luk R, et al. Best practices in scaling digital health in low and middle income countries. Glob Health. 2018;14(1):103.
12. Bashingwa JJH, Shah N, Mohan D, Scott K, Chamberlain S, Mulder N, et al. Examining the reach and exposure of a mobile phone-based training programme for frontline health workers (ASHAs) in 13 states across India. BMJ Glob Health. 2021;6(Suppl 5):e005299.
13. Winters N, Langer L, Nduku P, Robson J, O'Donovan J, Maulik P, et al. Using mobile technologies to support the training of community health workers in low-income and middle-income countries: mapping the evidence. BMJ Glob Health. 2019;4(4):e001421.
14. Nirisha PL, Malathesh BC, Kulal N, Harshithaa NR, Ibrahim FA, Suhas S, et al. Impact of technology driven mental health task-shifting for accredited social health activists (ASHAs): results from a randomised controlled trial of two methods of training. Commun Ment Health J. 2023;59(1):175–84.
15. Long LA, Pariyo G, Kallander K. Digital technologies for health workforce development in low- and middle-income countries: a scoping review. Glob Health Sci Pract. 2018;6(Supplement 1):S41–8.
16. Tyagi V, Khan A, Siddiqui S, Kakra Abhilashi M, Dhurve P, Tugnawat D, et al. Development of a digital program for training community health workers in the detection and referral of schizophrenia in rural India. Psychiatr Q. 2023;94(2):141–63.
17. Arora S, Thornton K, Murata G, Deming P, Kalishman S, Dion D, et al. Outcomes of treatment for hepatitis C virus infection by primary care providers. N Engl J Med. 2011;364(23):2199–207.
18. Ministry of Health and Family Welfare, Government of India. ASHA Training Modules: National Health Mission. 2022. http://nhm.gov.in/index1.php?lang=1&level=3&sublinkid=184&lid=257. Accessed 11 Aug 2022.
19. Zurawski A, Komaromy M, Ceballos V, McAuley C, Arora S. Project ECHO brings innovation to community health worker training and support. J Health Care Poor Underserved. 2016;27(4):53–61.
20. Komaromy M, Ceballos V, Zurawski A, Bodenheimer T, Thom DH, Arora S. Extension for community healthcare outcomes (ECHO): a new model for community health worker training and support. J Public Health Policy. 2018;39(2):203–16.
21. Damian AJ, Robinson S, Manzoor F, Lamb M, Rojas A, Porto A, et al. A mixed methods evaluation of the feasibility, acceptability, and impact of a pilot project ECHO for community health workers (CHWs). Pilot Feasibility Stud. 2020;6(1):132.
22. Feroz AS, Khoja A, Saleem S. Equipping community health workers with digital tools for pandemic response in LMICs. Arch Public Health. 2021;79(1):1.
23. Creswell JW, Clark VLP. Designing and conducting mixed methods research. Thousand Oaks: SAGE Publications; 2017. p. 521.
24. Tran L, Feldman R, Riley T III, Jung J. Association of the extension for community healthcare outcomes project with use of direct-acting antiviral treatment among US adults with hepatitis C. JAMA Netw Open. 2021;4(7):e2115523.
25. Harpe SE. How to analyze Likert and other rating scale data. Curr Pharm Teach Learn. 2015;7(6):836–50.
26. Hackett A, Strickland K. Using the framework approach to analyse qualitative data: a worked example. Nurse Res. 2018;26(2):8.
27. Moore DEJ, Green JS, Gallis HA. Achieving desired results and improved outcomes: integrating planning and assessment throughout learning activities. J Contin Educ Health Prof. 2009;29(1):1.
28. Chokshi M, Patil B, Khanna R, Neogi SB, Sharma J, Paul VK, et al. Health systems in India. J Perinatol. 2016;36(Suppl 3):S9–12.
29. National Health Systems Resource Centre. ASHA: which way forward? Evaluation of the ASHA Programme. 2010. https://nhm.gov.in/images/pdf/communitisation/asha/Studies/Evaluation_of_ASHA_Program_2010-11_Report.pdf. Accessed 17 Dec 2022.
30. National Health Systems Resource Centre. Tenth common review mission: National Health Mission. https://nhm.gov.in/images/pdf/monitoring/crm/10th-crm/Report/10th_CRM_Main_Report.pdf. Accessed 29 Dec 2022.
31. DeRenzi B, Wacksman J, Dell N, Lee S, Lesh N, Borriello G, et al. Closing the feedback loop: a 12-month evaluation of ASTA, a self-tracking application for ASHAs. In: Proceedings of the Eighth International Conference on Information and Communication Technologies and Development. New York: Association for Computing Machinery; 2016. p. 1–10. https://doi.org/10.1145/2909609.2909652.
32. Yadav D. Low-cost mobile learning solutions for community health workers. In: Proceedings of the 26th International Conference on World Wide Web Companion. Geneva: International World Wide Web Conferences Steering Committee; 2017. p. 729–34. https://doi.org/10.1145/3041021.3053377.
33. Molapo M, Marsden G. Health education in rural communities with locally produced and locally relevant multimedia content. In: Proceedings of the 3rd ACM Symposium on Computing for Development. New York: Association for Computing Machinery; 2013. p. 1–2. https://doi.org/10.1145/2442882.2442913.
34. Bhowmick S, Sorathia K. Findings of the user study conducted to understand the training of rural ASHAs in India. In: Proceedings of the Tenth International Conference on Information and Communication Technologies and Development. New York: Association for Computing Machinery; 2019. p. 1–5. https://doi.org/10.1145/3287098.3287150.
35. Bikinesi L, O'Bryan G, Roscoe C, Mekonen T, Shoopala N, Mengistu AT, et al. Implementation and evaluation of a Project ECHO telementoring program for the Namibian HIV workforce. Hum Resour Health. 2020;18(1):61.
36. Adsul P, Nethan ST, deCortina SH, Dhanasekaran K, Hariprasad R. Implementing cancer screening programs by training primary care physicians in India—findings from the National Institute of Cancer Prevention Research Project ECHO for cancer prevention. Glob Implement Res Appl. 2022;2(1):34–41.
37. White C, McIlfatrick S, Dunwoody L, Watson M. Supporting and improving community health services—a prospective evaluation of ECHO technology in community palliative care nursing teams. BMJ Support Palliat Care. 2019;9(2):202–8.
38. Usher R, Payne C, Real S, Carey L. Project ECHO: enhancing palliative care for primary care occupational therapists and physiotherapists in Ireland. Health Soc Care Commun. 2022;30(3):1143–53.
39. Wood BR, Unruh KT, Martinez-Paz N, Annese M, Ramers CB, Harrington RD, et al. Impact of a telehealth program that delivers remote consultation and longitudinal mentorship to community HIV providers. Open Forum Infect Dis. 2016;3(3):123.
40. Katzman JG, Comerci GJ, Boyle JF, Duhigg D, Shelley B, Olivas C, et al. Innovative telementoring for pain management: Project ECHO Pain. J Contin Educ Health Prof. 2014;34(1):68–75.
41. Kalne PS, Kalne PS, Mehendale AM. Acknowledging the role of community health workers in providing essential healthcare services in rural India—a review. Cureus. 2022;14(9):e29372.
42. Rohith M, Angadi MM. Evaluation of knowledge and practice of ASHAs, regarding child health services in Vijyapaura district, Karnataka. J Fam Med Prim Care. 2020;9(7):3272–6.
43. Shrivastava A, Srivastava A. Measuring communication competence and effectiveness of ASHAs (accredited social health activist) in their leadership role at rural settings of Uttar Pradesh (India). Leadersh Health Serv. 2016;29(1):69–81.
44. Scott K, Ummer O, Chamberlain S, Sharma M, Gharai D, Mishra B, et al. '[We] learned how to speak with love': a qualitative exploration of accredited social health activist (ASHA) community health worker experiences of the Mobile Academy refresher training in Rajasthan, India. BMJ Open. 2022;12(6):e050363.
45. Goel AD, Gosain M, Amarchand R, Sharma H, Rai S, Kapoor SK, et al. Effectiveness of a quality improvement program using difference-in-difference analysis for home based newborn care—results of a community intervention trial. Indian J Pediatr. 2019;86(11):1028–35.
46. Ved R, Scott K. Counseling is a relationship not just a skill: re-conceptualizing health behavior change communication by India's accredited social health activists. Glob Health Sci Pract. 2020;8(3):332–4.
47. Abdel-All M, Putica B, Praveen D, Abimbola S, Joshi R. Effectiveness of community health worker training programmes for cardiovascular disease management in low-income and middle-income countries: a systematic review. BMJ Open. 2017;7(11):e015529.
48. Smittenaar P, Ramesh BM, Jain M, Blanchard J, Kemp H, Engl E, et al. Bringing greater precision to interactions between community health workers and households to improve maternal and newborn health outcomes in India. Glob Health Sci Pract. 2020;8(3):358–71.
49. Shimasaki S, Bishop E, Guthrie M, Thomas JF. Strengthening the health workforce through the ECHO stages of participation: participants' perspectives on key facilitators and barriers. J Med Educ Curric Dev. 2019;6:2382120518820922.
50. Medhanyie AA, Moser A, Spigt M, Yebyo H, Little A, Dinant G, et al. Mobile health data collection at primary health care in Ethiopia: a feasible challenge. J Clin Epidemiol. 2015;68(1):80–6.

Acknowledgements

The authors wish to thank all the healthcare workers who kindly participated in this study giving their time, experience, and insights. We also thank Dr. Sourabh Chakraborty (Professor, JSPH), Mr. Swapnil Gupta, and the JSPH data collection team for their contribution to the collection of good quality data in a short time.

Funding

The study was funded by Extension for Community Healthcare Outcomes (ECHO) India.

Author information

Authors and Affiliations

Public Health Foundation of India (PHFI), Gurugram, Haryana, India

Rajmohan Panda

Extension for Community Healthcare Outcomes (ECHO) India, Okhla Phase III, New Delhi, India

Supriya Lahoti, Nivedita Mishra, Apoorva Karan Rai & Kumud Rai

HexaHealth, Gurugram, Haryana, India

Rajath R. Prabhu

Hamad Medical Corporation, Doha, Qatar

Kalpana Singh

Contributions

R.M. contributed to the conception and design of the study and significant inputs for data analysis and made a significant contribution to the drafting of the discussion and conclusion of the paper. S.L. wrote the first draft of the manuscript. N.M. and S.L. contributed to the implementation of the study and development of interview guides, analysis, and validation of qualitative data. R.P. and K.S. contributed to the analysis and validation of quantitative data. R.M., N.M., R.P., K.S, A.K.R. and K.R. reviewed the manuscript and gave significant inputs for improving the paper. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Supriya Lahoti .

Ethics declarations

Ethics approval and consent to participate

Ethical clearance was received from the Institutional Ethics Committee (IEC) of the Public Health Foundation of India (PHFI) (ref: TRC-IEC 472/21, dated 26 August 2021). The study has also been registered with the Clinical Trials Registry, India (CTRI/2021/10/037189). All methods were performed in accordance with the relevant guidelines and regulations. A written Participant Information Sheet (PIS) and informed consent form were provided to the participants before the interviews. Verbal informed consent was taken from all participants, and the process of verbal informed consent was approved by the IEC of the PHFI. Data confidentiality was maintained by coding all participants with unique identification (ID) numbers. The interviews were audio-recorded, and audio files and transcripts were kept in a password-protected folder.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1: Appendix S1. Table S1.1. Satisfaction with different factors of the training. Table S1.2. Satisfaction with content and environment of the training. Table S1.3. Challenges faced with respect to the ECHO tele-mentoring model. Table S2. Technical knowledge and skills. Table S3. Statements assessing competence. Table S4. Statements assessing attitude and performance.

Additional file 2: Appendix S2. Participants in qualitative interviews.

Additional file 3: Appendix S3. Key informant interview guide for trainers (end-line evaluation).

Additional file 4: Appendix S4. Key informant interview guide for hub leaders (end-line evaluation).

Additional file 5: Appendix S5. In-depth interview guide for ASHAs (end-line evaluation).

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Panda, R., Lahoti, S., Mishra, N. et al. A mixed methods evaluation of the impact of ECHO® telementoring model for capacity building of community health workers in India. Hum Resour Health 22, 26 (2024). https://doi.org/10.1186/s12960-024-00907-y


Received: 14 March 2023

Accepted: 10 April 2024

Published: 23 April 2024

DOI: https://doi.org/10.1186/s12960-024-00907-y


Keywords

  • Community health workers (CHWs)
  • Accredited social health activists (ASHAs)
  • Maternal and child health
  • Primary healthcare
  • Health worker training
  • ECHO telementoring
  • Mixed-method study

Human Resources for Health

ISSN: 1478-4491



Research: Boards Still Have an ESG Expertise Gap — But They’re Improving

  • Tensie Whelan


Over the last five years, the percentage of Fortune 100 board members possessing relevant credentials rose from 29% to 43%.

The role of U.S. public boards in managing environmental, social, and governance (ESG) issues has significantly evolved over the past five years. Initially, boards were largely unprepared to handle financially material ESG topics, lacking the necessary background and credentials. However, recent developments show a positive shift, with the percentage of Fortune 100 board members possessing relevant ESG credentials rising from 29% to 43%. This increase is primarily in environmental and governance credentials, while social credentials have seen less growth. Despite this progress, major gaps remain, particularly in climate change and worker welfare expertise. Notably, the creation of dedicated ESG/sustainability committees has surged, promoting better oversight of sustainability issues. This shift is crucial as companies increasingly face both regulatory pressures and strategic opportunities in transitioning to a low-carbon economy.

Knowing the right questions to ask management on material environmental, social, and governance issues has become an important part of a board’s role. Five years ago, our research at NYU Stern Center for Sustainable Business found U.S. public boards were not fit for this purpose — very few had the background and credentials necessary to provide oversight of ESG topics such as climate, employee welfare, financial hygiene, and cybersecurity. Today, we find that while boards are still woefully underprepared in certain areas, there has been some important progress.

  • Tensie Whelan is a clinical professor of business and society and the director of the NYU Stern Center for Sustainable Business, and she sits on the advisory boards of Arabesque and Inherent Group.



COMMENTS

  1. The impact of quantitative research in social work

    This paper is the first to focus on the academic impact of quantitative research in social work developing measurable outcomes. It focuses on three leading British-based generic journals over a 10-year period, encapsulating 1490 original articles. Impact is measured through three indices: Google Scholar and Web of Science Citations, and downloads. (See the brief sketch after this list for one illustrative way such indices might be combined.)

  2. Quantitative Research Methods for Social Work: Making Social Work Count

    This book arose from funding from the Economic and Social Research Council to address the quantitative skills gap in the social sciences. The grants were applied for under the auspices of the Joint University Council Social Work Education Committee to upskill social work academics and develop a curriculum resource with teaching aids.

  3. The Positive Contributions of Quantitative Methodology to Social Work

    Quantitative social work research does face peculiarly acute difficulties arising from the intangible nature of its variables, the fluid, probabilistic way in which these variables are connected, and the degree to which outcome criteria are subject to dispute. ... IMPACT Intensive Matched Probation and Aftercare Treatment (II): The results of ...

  4. The Nature and Extent of Quantitative Research in Social Work: A Ten

    Introduction: Quantitative work seems to present many people in social work with particular problems. Sharland's (2009) authoritative review notes the difficulty 'with people doing qualitative research not by choice but because it's the only thing they feel safe in' (p. 31).

  5. Nature and Extent of Quantitative Research in Social Work Journals: A

    Sebastian Kurten, Nausikaä Brimmel, Kathrin Klein, Katharina Hutter, Nature and Extent of Quantitative Research in Social Work Journals: A Systematic Review from 2016 to 2020, The British Journal of Social Work, Volume 52, Issue 4, ...

  6. Shaping Social Work Science: What Should Quantitative Researchers Do

    Based on a review of economists' debates on mathematical economics, this article discusses a key issue for shaping the science of social work—research methodology. The article describes three important tasks quantitative researchers need to fulfill in order to enhance the scientific rigor of social work research.

  7. The impact of quantitative research in social work

    ABSTRACT The importance of quantitative research in the social sciences generally and social work specifically has been highlighted in recent years, in both an international and a British context. Consensus opinion in the UK is that quantitative work is the 'poor relation' in social work research, leading to a number of initiatives. However, Sharland's UK work involves interviews with ...

  8. The impact of quantitative research in social work

    This paper is the first to focus on the academic impact of quantitative research in social work developing measurable outcomes. It focuses on three leading British-based generic journals over a 10 ...

  9. Nature and Extent of Quantitative Research in Social Work: A Ten-Year

    Despite these initiatives, however, very little evidence exists about the nature, range and scope of quantitative research in social work. The research reported here draws on findings from a detailed analysis of 1,490 articles published over ten years in major British-based, but international in scope, refereed journals. The findings paint a ...

  10. Nature and Extent of Quantitative Research in Social Work Journals: A

    This study reviews 1,406 research articles published between 2016 and 2020 in the European Journal of Social Work (EJSW), the British Journal of Social Work (BJSW) and Research on Social Work ...

  11. Quantitative Research Methods for Social Work: Making Social Work Count

    The book is a comprehensive resource for students and educators. It is packed with activities and examples from social work covering the basic concepts of quantitative research methods - including reliability, validity, probability, variables and hypothesis testing - and explores key areas of data collection, analysis and evaluation ...

  12. Review of Social work research and evaluation: Quantitative and

    Reviews the book, Social Work Research and Evaluation: Quantitative and Qualitative Approaches edited by Richard M. Grinnell Jr. and Yvonne A. Unrau (2005). This book examines the basic tenets of quantitative and qualitative research methods with the aim of preparing practitioners for "becoming beginning critical consumers of the professional research literature." This textbook is broad in ...

  13. 11. Quantitative measurement

    Step 1: Specifying variables and attributes. The first component, the variable, should be the easiest part. At this point in quantitative research, you should have a research question that has at least one independent and at least one dependent variable. Remember that variables must be able to vary.

  14. Work-life balance, social support, and burnout: A quantitative study of

    Burnout has been reported to adversely impact health and wellbeing in social workers and is associated with a range of pathological ... Garza A. S., Slaughter J. E. (2011). Work engagement: a quantitative review and test of its relations with task and contextual performance. Personnel ... Research on Social Work Practice, 31(5 ...

  15. Module 3 Chapter 3: Overview of Quantitative Traditions

    Module 3 Chapter 3: Overview of Quantitative Traditions. Just as there were multiple traditions under the umbrella heading of qualitative research approaches, different types of quantitative research approaches are used in social work, social, and behavioral sciences. In this chapter you are introduced to: cross-sectional research.

  16. What is Quantitative Research?

    Quantitative methods emphasize objective measurements and the statistical, mathematical, or numerical analysis of data collected through polls, questionnaires, and surveys, or by manipulating pre-existing statistical data using computational techniques. Quantitative research focuses on gathering numerical data and generalizing it across groups of people or to explain a particular phenomenon.

  17. Causality and Causal Inference in Social Work: Quantitative and

    The Nature of Causality and Causal Inference. The human sciences, including social work, place great emphasis on understanding the causes and effects of human behavior, yet there is a lack of consensus as to how cause and effect can and should be linked (Parascandola & Weed, 2001; Salmon, 1998; Susser, 1973). What little consensus exists seems to be that effects are assumed to be consequences ...

  18. 10. Quantitative sampling

    Each setting (agency, social media) limits your reach to only a small segment of your target population who has the opportunity to be a part of your study. This intermediate point between the overall population and the sample of people who actually participate in the researcher's study is called a sampling frame.

  19. 8.3 Quantitative research questions

    Quantitative descriptive questions. The type of research you are conducting will impact the research question that you ask. Quantitative descriptive questions are arguably the easiest types of questions to formulate. For example, "What is the average student debt load of MSW students?" is an important descriptive question.

  20. 4.3 Quantitative research questions

    The type of research you are conducting will impact the research question that you ask. Probably the easiest questions to think of are quantitative descriptive questions. For example, "What is the average student debt load of MSW students?" is a descriptive question—and an important one. We aren't trying to build a causal relationship here.

  21. The Pursuit of Quality for Social Work Practice: Three Generations and

    Outcomes are key to documenting the impact of social work interventions. My 1978 "specifying" paper with Rosen emphasized that the effectiveness of social work practice could not be adequately evaluated without clear specification and measurement of various types of outcomes. ... Intervention research in social work: Recent advances and ...

  22. Can Quantitative Research Solve Social Problems? Pragmatism ...

    Journal of Business Ethics recently published a critique of ethical practices in quantitative research by Zyphur and Pierides (J Bus Ethics 143:1-16, 2017). The authors argued that quantitative research prevents researchers from addressing urgent problems facing humanity today, such as poverty, racial inequality, and climate change. I offer comments and observations on the authors ...

  23. The Impact of Social Media Influencers on Consumer Behavior

    The research topic explores the profound influence of social media influencers on consumer behavior in the contemporary digital landscape. As social media platforms continue to dominate the online ...

  24. Exploring sustainable development & the human impact of natural

    A Q&A with research assistant professor Chenyi Ma. What factors allow people to prepare for and recover from natural disasters? Dr. Chenyi Ma, a research assistant professor at Penn's School of Social Policy & Practice (SP2), conducts interdisciplinary research that investigates the role of inequality in disasters' impact and points to policy solutions.

  25. Quantitative study assesses how gender and race impact young athletes

    Across the U.S., there are over 8 million student-athletes in high school and college. Engaging in sports can contribute to physical, mental, and social benefits, and coaches can play a key role ...

  26. Research on Social Work Practice: Sage Journals

    Impact Factor: 1.8; 5-Year Impact Factor: 2.2. Research on Social Work Practice (RSWP), peer-reviewed and published eight times per year, is a disciplinary journal devoted to the publication of empirical research concerning the assessment methods and outcomes of social work practice.

  27. A mixed methods evaluation of the impact of ECHO® telementoring model

    The present study intends to assess the impact of the training program for improving the knowledge and skills of ASHA workers. We conducted a pre-post quasi-experimental study using a convergent parallel mixed-method approach. The quantitative survey (n = 490) assessed learning competence, performance, and satisfaction of the ASHAs.

  28. Research: Boards Still Have an ESG Expertise Gap

    Over the last five years, the percentage of Fortune 100 board members possessing relevant credentials rose from 29% to 43%. The role of U.S. public boards in managing environmental, social, and ...

  29. Screen Struggles and Screen Delight: Is Social Media Sabotaging or

    Her presentation will conclude by outlining future research directions that will further our understanding of social media's role in adolescent lives. About the Speaker: A University Distinguished Professor at the University of Amsterdam, Patti Valkenburg's research focuses on the impact of (social) media on youth and adults.

  30. Societies

    This research aims to analyse the work of two international information verification agencies on TikTok—MediaWise and Politifact—according to their evolution, approach, content, and format. To this end, a quantitative approach has been used with an inductive content analysis with nominal variables, which offers specific nuances adapted to the unit of analysis.
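
Several of the items above (notably items 1 and 8) describe measuring the academic impact of articles through three indices: Google Scholar citations, Web of Science citations, and downloads. The sketch below is a minimal illustration of one way such indices could be standardized and combined into a single composite score, assuming equal weighting; it is not the method of the cited paper, and the article names and counts are hypothetical.

```python
from statistics import mean, pstdev

# Hypothetical per-article counts: (Google Scholar citations,
# Web of Science citations, downloads).
articles = {
    "article_a": (120, 45, 3400),
    "article_b": (15, 6, 800),
    "article_c": (60, 22, 1500),
}

def z_scores(values):
    """Standardize a sequence of counts to mean 0, SD 1."""
    m, sd = mean(values), pstdev(values)
    return [(v - m) / sd if sd else 0.0 for v in values]

# Transpose so each column holds one index across all articles,
# standardize each index, then average the three z-scores per article.
columns = list(zip(*articles.values()))
standardized = [z_scores(col) for col in columns]
composite = {
    name: mean(col[i] for col in standardized)
    for i, name in enumerate(articles)
}

for name, score in sorted(composite.items(), key=lambda kv: -kv[1]):
    print(f"{name}: composite impact z-score = {score:+.2f}")
```

Standardizing each index before averaging stops a high-volume measure such as downloads from swamping the citation counts; a real bibliometric analysis would also have to deal with skewed distributions and field differences, which simple z-scores ignore.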