
Chapter 11: Presenting Your Research

Writing a Research Report in American Psychological Association (APA) Style

Learning Objectives

  • Identify the major sections of an APA-style research report and the basic contents of each section.
  • Plan and write an effective APA-style research report.

In this section, we look at how to write an APA-style empirical research report, an article that presents the results of one or more new studies. Recall that the standard sections of an empirical research report provide a kind of outline. Here we consider each of these sections in detail, including what information it contains, how that information is formatted and organized, and tips for writing each section. At the end of this section is a sample APA-style research report that illustrates many of these principles.

Sections of a Research Report

Title page and abstract.

An APA-style research report begins with a title page. The title is centred in the upper half of the page, with each important word capitalized. The title should clearly and concisely (in about 12 words or fewer) communicate the primary variables and research questions. This sometimes requires a main title followed by a subtitle that elaborates on the main title, in which case the main title and subtitle are separated by a colon. Here are some titles from recent issues of professional journals published by the American Psychological Association.

  • Sex Differences in Coping Styles and Implications for Depressed Mood
  • Effects of Aging and Divided Attention on Memory for Items and Their Contexts
  • Computer-Assisted Cognitive Behavioural Therapy for Child Anxiety: Results of a Randomized Clinical Trial
  • Virtual Driving and Risk Taking: Do Racing Games Increase Risk-Taking Cognitions, Affect, and Behaviour?

Below the title are the authors’ names and, on the next line, their institutional affiliation—the university or other institution where the authors worked when they conducted the research. As we have already seen, the authors are listed in an order that reflects their contribution to the research. When multiple authors have made equal contributions to the research, they often list their names alphabetically or in a randomly determined order.

In some areas of psychology, the titles of many empirical research reports are informal in a way that is perhaps best described as “cute.” They usually take the form of a play on words or a well-known expression that relates to the topic under study. Here are some examples from recent issues of the journal Psychological Science.

  • “Smells Like Clean Spirit: Nonconscious Effects of Scent on Cognition and Behavior”
  • “Time Crawls: The Temporal Resolution of Infants’ Visual Attention”
  • “Scent of a Woman: Men’s Testosterone Responses to Olfactory Ovulation Cues”
  • “Apocalypse Soon?: Dire Messages Reduce Belief in Global Warming by Contradicting Just-World Beliefs”
  • “Serial vs. Parallel Processing: Sometimes They Look Like Tweedledum and Tweedledee but They Can (and Should) Be Distinguished”
  • “How Do I Love Thee? Let Me Count the Words: The Social Effects of Expressive Writing”

Individual researchers differ quite a bit in their preference for such titles. Some use them regularly, while others never use them. What might be some of the pros and cons of using cute article titles?

For articles that are being submitted for publication, the title page also includes an author note that lists the authors’ full institutional affiliations, any acknowledgments the authors wish to make to agencies that funded the research or to colleagues who commented on it, and contact information for the authors. For student papers that are not being submitted for publication—including theses—author notes are generally not necessary.

The abstract is a summary of the study. It is the second page of the manuscript and is headed with the word Abstract. The first line is not indented. The abstract presents the research question, a summary of the method, the basic results, and the most important conclusions. Because the abstract is usually limited to about 200 words, it can be a challenge to write a good one.

Introduction

The introduction begins on the third page of the manuscript. The heading at the top of this page is the full title of the manuscript, with each important word capitalized as on the title page. The introduction includes three distinct subsections, although these are typically not identified by separate headings. The opening introduces the research question and explains why it is interesting, the literature review discusses relevant previous research, and the closing restates the research question and comments on the method used to answer it.

The Opening

The opening, which is usually a paragraph or two in length, introduces the research question and explains why it is interesting. To capture the reader’s attention, researcher Daryl Bem recommends starting with general observations about the topic under study, expressed in ordinary language (not technical jargon)—observations that are about people and their behaviour (not about researchers or their research; Bem, 2003 [1]). Concrete examples are often very useful here. According to Bem, this would be a poor way to begin a research report:

Festinger’s theory of cognitive dissonance received a great deal of attention during the latter part of the 20th century. (p. 191)

The following would be much better:

The individual who holds two beliefs that are inconsistent with one another may feel uncomfortable. For example, the person who knows that he or she enjoys smoking but believes it to be unhealthy may experience discomfort arising from the inconsistency or disharmony between these two thoughts or cognitions. This feeling of discomfort was called cognitive dissonance by social psychologist Leon Festinger (1957), who suggested that individuals will be motivated to remove this dissonance in whatever way they can. (p. 191)

After capturing the reader’s attention, the opening should go on to introduce the research question and explain why it is interesting. Will the answer fill a gap in the literature? Will it provide a test of an important theory? Does it have practical implications? Giving readers a clear sense of what the research is about and why they should care about it will motivate them to continue reading the literature review—and will help them make sense of it.

Breaking the Rules

Researcher Larry Jacoby reported several studies showing that a word that people see or hear repeatedly can seem more familiar even when they do not recall the repetitions—and that this tendency is especially pronounced among older adults. He opened his article with the following humorous anecdote:

A friend whose mother is suffering symptoms of Alzheimer’s disease (AD) tells the story of taking her mother to visit a nursing home, preliminary to her mother’s moving there. During an orientation meeting at the nursing home, the rules and regulations were explained, one of which regarded the dining room. The dining room was described as similar to a fine restaurant except that tipping was not required. The absence of tipping was a central theme in the orientation lecture, mentioned frequently to emphasize the quality of care along with the advantages of having paid in advance. At the end of the meeting, the friend’s mother was asked whether she had any questions. She replied that she only had one question: “Should I tip?” (Jacoby, 1999, p. 3)

Although both humour and personal anecdotes are generally discouraged in APA-style writing, this example is a highly effective way to start because it both engages the reader and provides an excellent real-world example of the topic under study.

The Literature Review

Immediately after the opening comes the literature review, which describes relevant previous research on the topic and can be anywhere from several paragraphs to several pages in length. However, the literature review is not simply a list of past studies. Instead, it constitutes a kind of argument for why the research question is worth addressing. By the end of the literature review, readers should be convinced that the research question makes sense and that the present study is a logical next step in the ongoing research process.

Like any effective argument, the literature review must have some kind of structure. For example, it might begin by describing a phenomenon in a general way along with several studies that demonstrate it, then describing two or more competing theories of the phenomenon, and finally presenting a hypothesis to test one or more of the theories. Or it might describe one phenomenon, then describe another phenomenon that seems inconsistent with the first one, then propose a theory that resolves the inconsistency, and finally present a hypothesis to test that theory. In applied research, it might describe a phenomenon or theory, then describe how that phenomenon or theory applies to some important real-world situation, and finally suggest a way to test whether it does, in fact, apply to that situation.

Looking at the literature review in this way emphasizes a few things. First, it is extremely important to start with an outline of the main points that you want to make, organized in the order that you want to make them. The basic structure of your argument, then, should be apparent from the outline itself. Second, it is important to emphasize the structure of your argument in your writing. One way to do this is to begin the literature review by summarizing your argument even before you begin to make it. “In this article, I will describe two apparently contradictory phenomena, present a new theory that has the potential to resolve the apparent contradiction, and finally present a novel hypothesis to test the theory.” Another way is to open each paragraph with a sentence that summarizes the main point of the paragraph and links it to the preceding points. These opening sentences provide the “transitions” that many beginning researchers have difficulty with. Instead of beginning a paragraph by launching into a description of a previous study, such as “Williams (2004) found that…,” it is better to start by indicating something about why you are describing this particular study. Here are some simple examples:

Another example of this phenomenon comes from the work of Williams (2004).

Williams (2004) offers one explanation of this phenomenon.

An alternative perspective has been provided by Williams (2004).

We used a method based on the one used by Williams (2004).

Finally, remember that your goal is to construct an argument for why your research question is interesting and worth addressing—not necessarily why your favourite answer to it is correct. In other words, your literature review must be balanced. If you want to emphasize the generality of a phenomenon, then of course you should discuss various studies that have demonstrated it. However, if there are other studies that have failed to demonstrate it, you should discuss them too. Or if you are proposing a new theory, then of course you should discuss findings that are consistent with that theory. However, if there are other findings that are inconsistent with it, again, you should discuss them too. It is acceptable to argue that the balance of the research supports the existence of a phenomenon or is consistent with a theory (and that is usually the best that researchers in psychology can hope for), but it is not acceptable to ignore contradictory evidence. Besides, a large part of what makes a research question interesting is uncertainty about its answer.

The Closing

The closing of the introduction—typically the final paragraph or two—usually includes two important elements. The first is a clear statement of the main research question or hypothesis. This statement tends to be more formal and precise than in the opening and is often expressed in terms of operational definitions of the key variables. The second is a brief overview of the method and some comment on its appropriateness. Here, for example, is how Darley and Latané (1968) [2] concluded the introduction to their classic article on the bystander effect:

These considerations lead to the hypothesis that the more bystanders to an emergency, the less likely, or the more slowly, any one bystander will intervene to provide aid. To test this proposition it would be necessary to create a situation in which a realistic “emergency” could plausibly occur. Each subject should also be blocked from communicating with others to prevent his getting information about their behaviour during the emergency. Finally, the experimental situation should allow for the assessment of the speed and frequency of the subjects’ reaction to the emergency. The experiment reported below attempted to fulfill these conditions. (p. 378)

Thus the introduction leads smoothly into the next major section of the article—the method section.

The method section is where you describe how you conducted your study. An important principle for writing a method section is that it should be clear and detailed enough that other researchers could replicate the study by following your “recipe.” This means that it must describe all the important elements of the study—basic demographic characteristics of the participants, how they were recruited, whether they were randomly assigned, how the variables were manipulated or measured, how counterbalancing was accomplished, and so on. At the same time, it should avoid irrelevant details such as the fact that the study was conducted in Classroom 37B of the Industrial Technology Building or that the questionnaire was double-sided and completed using pencils.

The method section begins immediately after the introduction ends with the heading “Method” (not “Methods”) centred on the page. Immediately after this is the subheading “Participants,” left justified and in italics. The participants subsection indicates how many participants there were, the number of women and men, some indication of their age, other demographics that may be relevant to the study, and how they were recruited, including any incentives given for participation.

Figure 11.1: Three ways of organizing an APA-style method section. Long description available.

After the participants section, the structure can vary a bit. Figure 11.1 shows three common approaches. In the first, the participants section is followed by a design and procedure subsection, which describes the rest of the method. This works well for methods that are relatively simple and can be described adequately in a few paragraphs. In the second approach, the participants section is followed by separate design and procedure subsections. This works well when both the design and the procedure are relatively complicated and each requires multiple paragraphs.

What is the difference between design and procedure? The design of a study is its overall structure. What were the independent and dependent variables? Was the independent variable manipulated, and if so, was it manipulated between or within subjects? How were the variables operationally defined? The procedure is how the study was carried out. It often works well to describe the procedure in terms of what the participants did rather than what the researchers did. For example, the participants gave their informed consent, read a set of instructions, completed a block of four practice trials, completed a block of 20 test trials, completed two questionnaires, and were debriefed and excused.

In the third basic way to organize a method section, the participants subsection is followed by a materials subsection before the design and procedure subsections. This works well when there are complicated materials to describe. This might mean multiple questionnaires, written vignettes that participants read and respond to, perceptual stimuli, and so on. The heading of this subsection can be modified to reflect its content. Instead of “Materials,” it can be “Questionnaires,” “Stimuli,” and so on.

The results section is where you present the main results of the study, including the results of the statistical analyses. Although it does not include the raw data—individual participants’ responses or scores—researchers should save their raw data and make them available to other researchers who request them. Several journals now encourage the open sharing of raw data online.

Although there are no standard subsections, it is still important for the results section to be logically organized. Typically it begins with certain preliminary issues. One is whether any participants or responses were excluded from the analyses and why. The rationale for excluding data should be described clearly so that other researchers can decide whether it is appropriate. A second preliminary issue is how multiple responses were combined to produce the primary variables in the analyses. For example, if participants rated the attractiveness of 20 stimulus people, you might have to explain that you began by computing the mean attractiveness rating for each participant. Or if they recalled as many items as they could from a study list of 20 words, did you count the number correctly recalled, compute the percentage correctly recalled, or perhaps compute the number correct minus the number incorrect? A third preliminary issue is the reliability of the measures. This is where you would present test-retest correlations, Cronbach’s α, or other statistics to show that the measures are consistent across time and across items. A final preliminary issue is whether the manipulation was successful. This is where you would report the results of any manipulation checks.
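The preliminary computations described above (a per-participant mean rating, a percentage correctly recalled, and a reliability coefficient such as Cronbach’s α) are straightforward to carry out. Here is a minimal Python sketch; the function names and data values are hypothetical illustrations, not taken from the chapter.

```python
# Illustrative calculations for the "preliminary issues" in a results
# section. Data below are invented for demonstration.
from statistics import mean, pvariance

def percent_recalled(n_correct, list_length):
    """Percentage of studied items correctly recalled."""
    return 100 * n_correct / list_length

def cronbach_alpha(item_scores):
    """Cronbach's alpha; rows are participants, columns are items."""
    k = len(item_scores[0])  # number of items in the measure
    item_vars = [pvariance(col) for col in zip(*item_scores)]
    total_var = pvariance([sum(row) for row in item_scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

ratings = [4, 5, 3, 4, 4]        # one participant's ratings of 5 stimuli
print(mean(ratings))             # per-participant mean rating -> 4.0
print(percent_recalled(14, 20))  # -> 70.0
```

When the items of a measure are perfectly consistent with one another, this formula returns an α of 1.0; values closer to 0 indicate less internal consistency.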

The results section should then tackle the primary research questions, one at a time. Again, there should be a clear organization. One approach would be to answer the most general questions and then proceed to answer more specific ones. Another would be to answer the main question first and then to answer secondary ones. Regardless, Bem (2003) [3] suggests the following basic structure for discussing each new result:

  • Remind the reader of the research question.
  • Give the answer to the research question in words.
  • Present the relevant statistics.
  • Qualify the answer if necessary.
  • Summarize the result.

Notice that only Step 3 necessarily involves numbers. The rest of the steps involve presenting the research question and the answer to it in words. In fact, the basic results should be clear even to a reader who skips over the numbers.
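Because Step 3 is the only step that involves numbers, it helps to render those numbers consistently. As a sketch of the reporting convention (the helper function itself is hypothetical), an APA-style statistics string such as “t(38) = 2.45, p = .019” can be produced like this:

```python
# Hypothetical helper illustrating APA-style statistical reporting:
# two decimal places for t, three for p, and no leading zero on p
# (because p cannot exceed 1).
def format_apa_t(t, df, p):
    """Return an APA-style string for a t-test result."""
    p_str = f"{p:.3f}".lstrip("0")  # ".019" rather than "0.019"
    return f"t({df}) = {t:.2f}, p = {p_str}"

print(format_apa_t(2.447, 38, 0.019))  # -> t(38) = 2.45, p = .019
```

The surrounding sentence still does the real work: the string above only supports, in numbers, an answer that should already be clear in words.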

The discussion is the last major section of the research report. Discussions usually consist of some combination of the following elements:

  • Summary of the research
  • Theoretical implications
  • Practical implications
  • Limitations
  • Suggestions for future research

The discussion typically begins with a summary of the study that provides a clear answer to the research question. In a short report with a single study, this might require no more than a sentence. In a longer report with multiple studies, it might require a paragraph or even two. The summary is often followed by a discussion of the theoretical implications of the research. Do the results provide support for any existing theories? If not, how can they be explained? Although you do not have to provide a definitive explanation or detailed theory for your results, you at least need to outline one or more possible explanations. In applied research—and often in basic research—there is also some discussion of the practical implications of the research. How can the results be used, and by whom, to accomplish some real-world goal?

The theoretical and practical implications are often followed by a discussion of the study’s limitations. Perhaps there are problems with its internal or external validity. Perhaps the manipulation was not very effective or the measures not very reliable. Perhaps there is some evidence that participants did not fully understand their task or that they were suspicious of the intent of the researchers. Now is the time to discuss these issues and how they might have affected the results. But do not overdo it. All studies have limitations, and most readers will understand that a different sample or different measures might have produced different results. Unless there is good reason to think they would have, however, there is no reason to mention these routine issues. Instead, pick two or three limitations that seem like they could have influenced the results, explain how they could have influenced the results, and suggest ways to deal with them.

Most discussions end with some suggestions for future research. If the study did not satisfactorily answer the original research question, what will it take to do so? What new research questions has the study raised? This part of the discussion, however, is not just a list of new questions. It is a discussion of two or three of the most important unresolved issues. This means identifying and clarifying each question, suggesting some alternative answers, and even suggesting ways they could be studied.

Finally, some researchers are quite good at ending their articles with a sweeping or thought-provoking conclusion. Darley and Latané (1968) [4], for example, ended their article on the bystander effect by discussing the idea that whether people help others may depend more on the situation than on their personalities. Their final sentence is, “If people understand the situational forces that can make them hesitate to intervene, they may better overcome them” (p. 383). However, this kind of ending can be difficult to pull off. It can sound overreaching or just banal and end up detracting from the overall impact of the article. It is often better simply to end when you have made your final point (although you should avoid ending on a limitation).

The references section begins on a new page with the heading “References” centred at the top of the page. All references cited in the text are then listed in the format presented earlier. They are listed alphabetically by the last name of the first author. If two sources have the same first author, they are listed alphabetically by the last name of the second author. If all the authors are the same, then they are listed chronologically by the year of publication. Everything in the reference list is double-spaced both within and between references.
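The ordering rule just described (first author’s surname, then later authors’ surnames, then year when the author lists match) is easy to express as a sort key. Here is a rough Python sketch; the data shape is assumed for illustration and is not an APA format.

```python
# Sketch of the APA ordering rule for a reference list. Each reference
# is represented here as an (author_surnames, year) pair.
def reference_sort_key(ref):
    """Sort by author surnames (first, then later authors), then year."""
    authors, year = ref
    return ([name.lower() for name in authors], year)

refs = [
    (["Smith", "Jones"], 2010),
    (["Smith", "Adams"], 2012),
    (["Smith", "Adams"], 2008),
]
ordered = sorted(refs, key=reference_sort_key)
# Smith & Adams (2008) comes before Smith & Adams (2012),
# which comes before Smith & Jones (2010).
```

Comparing the author lists element by element is what makes a shared first author fall back to the second author’s name, and identical author lists fall back to the year, exactly as the paragraph above describes.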

Appendices, Tables, and Figures

Appendices, tables, and figures come after the references. An appendix is appropriate for supplemental material that would interrupt the flow of the research report if it were presented within any of the major sections. An appendix could be used to present lists of stimulus words, questionnaire items, detailed descriptions of special equipment or unusual statistical analyses, or references to the studies that are included in a meta-analysis. Each appendix begins on a new page. If there is only one, the heading is “Appendix,” centred at the top of the page. If there is more than one, the headings are “Appendix A,” “Appendix B,” and so on, and they appear in the order they were first mentioned in the text of the report.

After any appendices come tables and then figures. Tables and figures are both used to present results. Figures can also be used to illustrate theories (e.g., in the form of a flowchart), display stimuli, outline procedures, and present many other kinds of information. Each table and figure appears on its own page. Tables are numbered in the order that they are first mentioned in the text (“Table 1,” “Table 2,” and so on). Figures are numbered the same way (“Figure 1,” “Figure 2,” and so on). A brief explanatory title, with the important words capitalized, appears above each table. Each figure is given a brief explanatory caption, where (aside from proper nouns or names) only the first word of each sentence is capitalized. More details on preparing APA-style tables and figures are presented later in the book.

Sample APA-Style Research Report

Figures 11.2, 11.3, 11.4, and 11.5 show some sample pages from an APA-style empirical research report originally written by undergraduate student Tomoe Suyama at California State University, Fresno. The main purpose of these figures is to illustrate the basic organization and formatting of an APA-style empirical research report, although many high-level and low-level style conventions can be seen here too.

""

Key Takeaways

  • An APA-style empirical research report consists of several standard sections. The main ones are the abstract, introduction, method, results, discussion, and references.
  • The introduction consists of an opening that presents the research question, a literature review that describes previous research on the topic, and a closing that restates the research question and comments on the method. The literature review constitutes an argument for why the current study is worth doing.
  • The method section describes the method in enough detail that another researcher could replicate the study. At a minimum, it consists of a participants subsection and a design and procedure subsection.
  • The results section describes the results in an organized fashion. Each primary result is presented in terms of statistical results but also explained in words.
  • The discussion typically summarizes the study, discusses theoretical and practical implications and limitations of the study, and offers suggestions for further research.
  • Practice: Look through an issue of a general interest professional journal (e.g., Psychological Science). Read the opening of the first five articles and rate the effectiveness of each one from 1 (very ineffective) to 5 (very effective). Write a sentence or two explaining each rating.
  • Practice: Find a recent article in a professional journal and identify where the opening, literature review, and closing of the introduction begin and end.
  • Practice: Find a recent article in a professional journal and highlight in a different colour each of the following elements in the discussion: summary, theoretical implications, practical implications, limitations, and suggestions for future research.

Long Descriptions

Figure 11.1 long description: Table showing three ways of organizing an APA-style method section.

In the simple method, there are two subheadings: “Participants” (which might begin “The participants were…”) and “Design and procedure” (which might begin “There were three conditions…”).

In the typical method, there are three subheadings: “Participants” (“The participants were…”), “Design” (“There were three conditions…”), and “Procedure” (“Participants viewed each stimulus on the computer screen…”).

In the complex method, there are four subheadings: “Participants” (“The participants were…”), “Materials” (“The stimuli were…”), “Design” (“There were three conditions…”), and “Procedure” (“Participants viewed each stimulus on the computer screen…”). [Return to Figure 11.1]

  • Bem, D. J. (2003). Writing the empirical journal article. In J. M. Darley, M. P. Zanna, & H. R. Roediger III (Eds.), The compleat academic: A practical guide for the beginning social scientist (2nd ed.). Washington, DC: American Psychological Association.
  • Darley, J. M., & Latané, B. (1968). Bystander intervention in emergencies: Diffusion of responsibility. Journal of Personality and Social Psychology, 4, 377–383.

Glossary

A type of research article which describes one or more new empirical studies conducted by the authors.

The page at the beginning of an APA-style research report containing the title of the article, the authors’ names, and their institutional affiliation.

A summary of a research study.

The third page of a manuscript containing the research question, the literature review, and comments about how to answer the research question.

An introduction to the research question and explanation for why this question is interesting.

A description of relevant previous research on the topic being discussed and an argument for why the research question is worth addressing.

The end of the introduction, where the research question is reiterated and the method is commented upon.

The section of a research report where the method used to conduct the study is described.

The section of a research report where the main results of the study, including the results of the statistical analyses, are presented.

Section of a research report that summarizes the study's results and interprets them by referring back to the study's theoretical background.

Part of a research report which contains supplemental material.

Research Methods in Psychology - 2nd Canadian Edition Copyright © 2015 by Paul C. Price, Rajiv Jhangiani, & I-Chant A. Chiang is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.



Lab Report Format: Step-by-Step Guide & Examples

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


In psychology, a lab report outlines a study’s objectives, methods, results, discussion, and conclusions, ensuring clarity and adherence to APA (or relevant) formatting guidelines.

A typical lab report would include the following sections: title, abstract, introduction, method, results, and discussion.

The title page, abstract, references, and appendices are started on separate pages (subsections from the main body of the report are not). Use double-line spacing of text, font size 12, and include page numbers.

The report should have a thread of arguments linking the prediction in the introduction to the content of the discussion.

Title

This must indicate what the study is about. It must include the variables under investigation. It should not be written as a question.

Title pages should be formatted in APA style.

Abstract

The abstract provides a concise and comprehensive summary of a research report. Your style should be brief but not use note form. Look at examples in journal articles. It should aim to explain very briefly (about 150 words) the following:

  • Start with a one/two sentence summary, providing the aim and rationale for the study.
  • Describe participants and setting: who, when, where, how many, and what groups?
  • Describe the method: what design, what experimental treatment, what questionnaires, surveys, or tests were used.
  • Describe the major findings, including a mention of the statistics used and the significance levels, or simply one sentence summing up the outcome.
  • The final sentence(s) outline the study’s “contribution to knowledge” within the literature. What does it all mean? Mention the implications of your findings if appropriate.

The abstract comes at the beginning of your report but is written at the end (as it summarises information from all the other sections of the report).

Introduction

The purpose of the introduction is to explain where your hypothesis comes from (i.e., it should provide a rationale for your research study).

Ideally, the introduction should have a funnel structure: Start broad and then become more specific. The aims should not appear out of thin air; the preceding review of psychological literature should lead logically into the aims and hypotheses.

The funnel structure of the introduction to a lab report

  • Start with general theory, briefly introducing the topic. Define the important key terms.
  • Explain the theoretical framework.
  • Summarise and synthesise previous studies – What was the purpose? Who were the participants? What did they do? What did they find? What do these results mean? How do the results relate to the theoretical framework?
  • Rationale: How does the current study address a gap in the literature? Perhaps it overcomes a limitation of previous research.
  • Aims and hypothesis. Write a paragraph explaining what you plan to investigate and make a clear and concise prediction regarding the results you expect to find.

There should be a logical progression of ideas that aids the flow of the report. This means the studies outlined should lead logically to your aims and hypotheses.

Do be concise and selective, and avoid the temptation to include anything in case it is relevant (i.e., don’t write a shopping list of studies).

Method

Use the following subheadings:

Participants

  • How many participants were recruited?
  • Say how you obtained your sample (e.g., opportunity sample).
  • Give relevant demographic details (e.g., gender, ethnicity, age range, mean age, and standard deviation).
Design

  • State the experimental design.
  • What were the independent and dependent variables ? Make sure the independent variable is labeled and name the different conditions/levels.
  • For example, if gender is the independent variable, then male and female are the levels/conditions/groups.
  • How were the IV and DV operationalized?
  • Identify any controls used, e.g., counterbalancing and control of extraneous variables.
Materials

  • List all the materials and measures (e.g., what was the title of the questionnaire? Was it adapted from a study?).
  • You do not need to include wholesale replication of materials – instead, include a sensible, illustrative level of detail. For example, give examples of questionnaire items.
  • Include the reliability (e.g., alpha values) for the measure(s).
Procedure

  • Describe the precise procedure you followed when conducting your research, i.e., exactly what you did.
  • Describe in sufficient detail to allow for replication of findings.
  • Be concise in your description and omit extraneous/trivial details, e.g., you don’t need to include details regarding instructions, debrief, record sheets, etc.
  • Assume the reader has no knowledge of what you did and ensure that he/she can replicate (i.e., copy) your study exactly by what you write in this section.
  • Write in the past tense.
  • Don’t justify or explain in the Method (e.g., why you chose a particular sampling method); just report what you did.
  • Only give enough detail for someone to replicate the experiment – be concise in your writing.
Results

The results section of a paper usually presents descriptive statistics followed by inferential statistics.
  • Report the means, standard deviations, and 95% confidence intervals (CIs) for each IV level. If you have four to 20 numbers to present, a well-presented table is best, APA style.
  • Name the statistical test being used.
  • Report appropriate statistics (e.g., t-scores, p values ).
  • Report whether or not the results are significant, as well as the direction of the results (e.g., which group performed better?).
  • It is optional to report the effect size (this does not appear on the SPSS output).
  • Avoid interpreting the results (save this for the discussion).
  • Make sure the results are presented clearly and concisely. A table can be used to display descriptive statistics if this makes the data easier to understand.
  • DO NOT include any raw data.
  • Follow APA style.
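The descriptive statistics called for above can be computed without specialist software. The sketch below is illustrative only: the recall scores are invented, and it uses the normal-approximation critical value 1.96 for the 95% CI (a small sample like this would normally use a t critical value instead).

```python
import math
import statistics

def describe(scores):
    """Return the mean, sample SD, and an approximate 95% CI for one condition."""
    n = len(scores)
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)          # sample SD (n - 1 denominator)
    margin = 1.96 * sd / math.sqrt(n)      # normal-approximation margin of error
    return mean, sd, (mean - margin, mean + margin)

# Hypothetical recall scores for one condition of a memory experiment
scores = [12, 15, 11, 14, 13, 16, 12, 14]
mean, sd, (lo, hi) = describe(scores)
print(f"M = {mean:.2f}, SD = {sd:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

Formatted this way, the values drop straight into an APA-style results sentence or summary table.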

Use APA Style

  • Report numbers to 2 decimal places, including a 0 before the decimal point for values that can exceed 1.0 (e.g., “0.51”). The exception to this rule: numbers that can never exceed 1.0 (e.g., p-values, r-values) are reported to 3 decimal places with no 0 before the decimal point (e.g., “.001”).
  • Percentages and degrees of freedom: report as whole numbers.
  • Statistical symbols that are not Greek letters should be italicized (e.g., M , SD , t , F , p , d ).
  • Include spaces on either side of the equals sign.
  • When reporting 95% confidence intervals (CIs), upper and lower limits are given inside square brackets, e.g., “95% CI [73.37, 102.23]”
Discussion

  • Outline your findings in plain English (avoid statistical jargon) and relate your results to your hypothesis, e.g., is it supported or rejected?
  • Compare your results to background materials from the introduction section. Are your results similar or different? Discuss why/why not.
  • How confident can we be in the results? Acknowledge limitations, but only if they can explain the result obtained. If the study has found a reliable effect, be careful about suggesting limitations, as doing so casts doubt on your results. Unless you can think of a confounding variable that could explain the results instead of the IV, it would be advisable to leave this section out.
  • Suggest constructive ways to improve your study if appropriate.
  • What are the implications of your findings? Say what your findings mean for how people behave in the real world.
  • Suggest an idea for further research triggered by your study, something in the same area but not simply an improved version of yours. Perhaps you could base this on a limitation of your study.
  • Concluding paragraph – Finish with a statement of your findings and the key points of the discussion (e.g., interpretation and implications) in no more than 3 or 4 sentences.
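The number-formatting rules listed under “Use APA Style” above can be captured in a small helper. This is only a sketch (the function name and the two-category split are my own simplification), but it shows the leading-zero rule in action:

```python
def apa_number(value, bounded_by_one=False):
    """Format a statistic in APA style.

    Values that can never exceed 1.0 (e.g., p- and r-values) get three
    decimal places and no 0 before the decimal point; everything else
    gets two decimal places with the leading 0 kept.
    """
    if bounded_by_one:
        # ".051" rather than "0.051"; also handles negative r-values
        return f"{value:.3f}".replace("0.", ".", 1)
    return f"{value:.2f}"

print(apa_number(0.51))                          # 0.51
print(apa_number(0.0014, bounded_by_one=True))   # .001
print(apa_number(-0.512, bounded_by_one=True))   # -.512
```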

Reference Page

The reference section lists all the sources cited in the essay (alphabetically). It is not a bibliography (a list of the books you used).

In simple terms, every time you refer to a psychologist’s name (and date), you need to reference the original source of information.

If you have been using textbooks, this is easy, as the references are usually at the back of the book and you can just copy them down. If you have been using websites, you may have a problem, as they might not provide a reference section for you to copy.

References need to be set out in APA style:

Author, A. A. (year). Title of work . Location: Publisher.

Journal Articles

Author, A. A., Author, B. B., & Author, C. C. (year). Article title. Journal Title, volume number (issue number), page numbers.

A simple way to write your reference section is to use Google Scholar. Just type the name and date of the psychologist in the search box and click on the “cite” link.


Next, copy and paste the APA reference into the reference section of your essay.


Once again, remember that references need to be in alphabetical order according to surname.
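Because every entry begins with the first author’s surname, alphabetising a reference list amounts to sorting the entry strings. A minimal sketch (the entries are hypothetical placeholders with titles elided):

```python
# Hypothetical, unordered reference entries (titles elided)
references = [
    "Paivio, A. (1970). ...",
    "Asch, S. E. (1951). ...",
    "Crutchfield, R. S. (1955). ...",
]

# Each string starts with the surname, so a case-insensitive sort
# puts the list in the required alphabetical order.
references.sort(key=str.lower)
for entry in references:
    print(entry)
```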

Psychology Lab Report Example

Quantitative paper template

Quantitative professional paper template: Adapted from “Fake News, Fast and Slow: Deliberation Reduces Belief in False (but Not True) News Headlines,” by B. Bago, D. G. Rand, and G. Pennycook, 2020,  Journal of Experimental Psychology: General ,  149 (8), pp. 1608–1613 ( https://doi.org/10.1037/xge0000729 ). Copyright 2020 by the American Psychological Association.

Qualitative paper template

Qualitative professional paper template: Adapted from “‘My Smartphone Is an Extension of Myself’: A Holistic Qualitative Exploration of the Impact of Using a Smartphone,” by L. J. Harkin and D. Kuss, 2020,  Psychology of Popular Media ,  10 (1), pp. 28–38 ( https://doi.org/10.1037/ppm0000278 ). Copyright 2020 by the American Psychological Association.


Psychological Report Writing

March 8, 2021 – Paper 2: Psychology in Context | Research Methods


Writing up Psychological Investigations

Through using this website, you have learned about, referred to, and evaluated research studies. These research studies are generally presented to the scientific community as a journal article. Most journal articles follow a standard format. This is similar to the way you may have written up experiments in other sciences.

In a research report there are usually six sub-sections:

(1)  Abstract:  This is always written last because it is a very brief summary:

  • Include a one-sentence summary giving the topic to be studied. This may include the hypothesis and some brief theoretical background, for example, the name of the researchers whose work you have replicated.
  • Describe the participants, number used and how they were selected.
  • Describe the method and design used and any questionnaires etc. you employed.
  • State your major findings, which should include a mention of the statistics used, the observed and critical values, and whether or not your results were found to be significant, including the level of significance.
  • Briefly summarise what your study shows, the conclusion of your findings and any implications it may have. State whether the experimental or null hypothesis has been accepted/rejected.
  • This should be around 150 words.

(2) Introduction:

This tells everyone why the study is being carried out and the commentary should form a ‘funnel’ of information. First, there is broad coverage of all the background research with appropriate evaluative comments: “Asch (1951) found…but Crutchfield (1955) showed…” Once the general research has been covered, the focus becomes much narrower finishing with the main researcher/research area you are hoping to support/refute. This then leads to the aims and hypothesis/hypotheses (i.e. experimental and null hypotheses) being stated.

(3) Method:

This section is split into sub-sections:

(1) Design:

  • What experimental method has been used?
  • What experimental design was used – independent groups, repeated measures, or matched pairs? Justify your choice.
  • What are the IV and DV? These should be operationalised.
  • Any potential EVs?
  • How will these EVs be overcome?
  • Ethical issues? What strategies will be used to overcome these ethical issues?

(2) Participants:

  • Who is the target population? Age, socio-economic status, gender, etc.
  • What sampling technique has been used? Why?
  • Details of the participants that have been used – do they have certain characteristics?
  • How have participants been allocated to conditions?

(3) Materials:

  • Description of all equipment used and how to use it (essential for replication)
  • Stimulus materials for participants should be in the appendix

(4) Procedure:

  • This is a step-by-step guide of how the study was carried out: when, where, and how.
  • Instructions to participants must be standardised to allow replication
  • Lengthy sets of instructions and instructions to participants should be in the appendix

(4) Results:

This section contains:

  • A summary of the data. All raw data and calculations are put in the appendix.
  • This generally starts with a section of descriptive statistics: measures of central tendency and dispersion.
  • Summary tables, which should be clearly labelled and referred to in the text, e.g., “Table One shows that…” Graphical representations of the data must also be clear and properly labelled and referred to in the text, e.g., “It can be seen from Figure 1 that…”
  • Once the summary statistics have been explained, there should be an analysis of the results of any inferential tests, including observed values, how these relate to the critical table value, significance level and whether the test was one- or two-tailed.
  • This section finishes with the rejection or acceptance of the null hypothesis.
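As an illustration of the inferential step described above, the sketch below computes an observed t for an independent-samples design and compares it with the critical table value. The data are invented for the example; 2.101 is the standard two-tailed critical value for df = 18 at the .05 level.

```python
import math
import statistics

def independent_t(group_a, group_b):
    """Observed t for an independent-samples t-test (pooled variance)."""
    na, nb = len(group_a), len(group_b)
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    pooled = ((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)
    return (mean_a - mean_b) / math.sqrt(pooled * (1 / na + 1 / nb))

# Hypothetical recall scores for two conditions (n = 10 each, so df = 18)
condition_a = [14, 16, 13, 15, 17, 14, 15, 16, 13, 15]
condition_b = [11, 12, 13, 10, 12, 11, 13, 12, 11, 12]

t_observed = independent_t(condition_a, condition_b)
t_critical = 2.101  # two-tailed, alpha = .05, df = 18 (from tables)
significant = abs(t_observed) > t_critical
print(f"Observed t(18) = {t_observed:.2f}; critical value = {t_critical}; "
      f"null hypothesis {'rejected' if significant else 'retained'}")
```

In the report itself only the reported values (observed t, critical value, significance level, and whether the test was one- or two-tailed) would appear, not the calculation.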

(5) Discussion:

This sounds like a repeat of the results section, but here you need to state what you have found in terms of psychology rather than in statistical terms; in particular, relate your findings to your hypotheses. Mention the strength of your findings, for example, whether they were significant and at what level. If your hypothesis was one-tailed and your results have gone in the opposite direction, this needs to be indicated. If you have any additional findings to report, other than those relating to the hypotheses, then they too can be included.

All studies have flaws, so anything that went wrong or the limitations of the study are discussed together with suggestions for how it could be improved if it were to be repeated. Suggestions for alternative studies and future research are also explored. The discussion ends with a paragraph summing up what was found and assessing the implications of the study and any conclusions that can be drawn from it.

(6) Referencing (Harvard Referencing):

References should contain details of all the research covered in a psychological report. It is not sufficient to simply list the books used.

What you should do:

Look through your report and include a reference for every researcher mentioned. A reference should include: the name of the researcher, the date the research was published, the title of the book/journal, where the book was published (or what journal the article was published in), the edition number of the book/volume of the journal article, and the page numbers used.

Example: Paivio, A., & Madigan, S. A. (1970). Noun imagery and frequency in paired-associate and free learning recall. Canadian Journal of Psychology, 24, 353–361.

Other rules: Make sure that the references are placed in alphabetical order.

Exam Tip:  In the exam, the types of questions you could expect relating to report writing include defining what information you would find in each section of the report. In addition, on the old specification, questions linked to report writing have included writing up a method section, writing up a results section, and designing a piece of research.

In addition, in the exam, you may be asked to write a  consent form ,  debriefing sheet,  or a set of  standardised instructions.

Writing a Consent Form for a Psychological Report

Remember the mnemonic TAPCHIPS. Your consent form should include the following:

(1)  T itle of the Project:

(2)  A im of the study?

(3)  P rocedure – What will I be asked to do if I take part?

You should give a brief description of what the participants will have to do if they decide to consent to take part in the study (e.g., complete a 15-minute memory test).

(4) Will your data be kept  C onfidential?

Explain how you will make sure that all personal details will be kept confidential.

(5) Do I  H ave to take part?

Explain to the participant that they don’t have to take part in the study, and explain their right to withdraw.

(6)  I nformation? Where can I obtain further information if I need it?

Provide the participant with the contact details of the key researchers carrying out the study.

(7)  P articipant responses to the following questions:

Have you received enough information about the study? YES/NO

Do you consent for your data to be used in this study and retained for use in other studies? YES/NO

Do you understand that you do not need to take part in the study and that you can withdraw your participation at any time without reason or detriment? YES/NO

(8)  S ignatures from the participant and the researcher will need to be acquired at the bottom of the consent form.

Writing a set of Standardised Instructions for a Psychological Investigation

When writing a set of standardised instructions, it is essential that you include:

1. Include enough information to allow for replication of the study.

2. You must write the instructions so that they can simply be read out by the researcher to the participants.

3. You should welcome the participants to the study.

4. Thank the participants for giving their consent to take part.

5. Explain to the participants what will happen in the study, what they will be expected to do (step by step), and how long the task/specific parts of the task will take to complete.

6. Remind participants that they have the right to withdraw throughout the study.

7. Ask participants at the end if they have any questions.

8. Check that the participants are still happy to proceed with the study.

Writing a Debriefing Form for a Psychological Report

This is the form that you should complete with your participants at the end of the study to ensure that they are happy with the way the study has been conducted, to explain to them the true nature of the study, to confirm consent and to give them the researcher’s contact details in case they want to ask any further questions.

  • Thank  the participants for taking part in the study.
  • Outline the true aims  of the research (what were the participants expected to do? What happened in each of the different conditions?)
  • Explain what you were  looking to find.
  • Explain  how the data will be used  now and in the future.
  • Remind  the participants that they have the  right to withdraw  now and after the study.
  • Thank  participants once  again  for taking part.
  • Remind the participant of the  researchers’ contact details.

Designing Research

One of the questions that you may get asked in the exam is to design a piece of research. The best way to go about this is to include similar information to what you would when writing up the  method section of a psychological report.

Things to Consider…

  • What experimental or non-experimental method will you use?  ( Lab, field, or natural experiment? Questionnaire (open/closed questions?), interviews (structured, unstructured, semi-structured?), observation?)
  • Why?   ( Does this method allow a great deal of control? Is it in a natural setting and would it show behaviour reflective of real life? Would it allow participants to remain anonymous and therefore make them more likely to tell the truth/act in a realistic way? Does the method avoid demand characteristics?) 
  • Experimental design type   ( independent groups, repeated measures, or matched pairs? Justify your choice.)
  • What are the IV and DV? These should be operationalised  ( how are you going to measure these variables?)
  • Any potential EVs?  ( Participant variables, experimenter effects, demand characteristics, situational variables?)
  • How will these EVs be overcome?  ( Are you going to put some control mechanisms in place? Are you going to use standardised instructions? Double or single blind? Will the experimental design that you are using help to overcome EVs?)
  • Ethical issues?  ( What are the potential ethical issues and what strategies are you going to use to overcome them?)
  • Who is the target population?  ( Age, socio-economic status, gender, etc.)
  • How have participants been allocated to conditions?  ( Have you used random allocation? Why have you adopted this technique?)
  • Procedure: a step-by-step guide of how the study will be carried out – from beginning to end, how are you going to conduct the study?


It’s  Soooo  Cute!  How Informal Should an Article Title Be?

In some areas of psychology, the titles of many empirical research reports are informal in a way that is perhaps best described as “cute.” They usually take the form of a play on words or a well-known expression that relates to the topic under study. Here are some examples from recent issues of the journal Psychological Science .

  • “Smells Like Clean Spirit: Nonconscious Effects of Scent on Cognition and Behavior”
  • “Time Crawls: The Temporal Resolution of Infants’ Visual Attention”
  • “Scent of a Woman: Men’s Testosterone Responses to Olfactory Ovulation Cues”
  • “Apocalypse Soon?: Dire Messages Reduce Belief in Global Warming by Contradicting Just-World Beliefs”
  • “Serial vs. Parallel Processing: Sometimes They Look Like Tweedledum and Tweedledee but They Can (and Should) Be Distinguished”
  • “How Do I Love Thee? Let Me Count the Words: The Social Effects of Expressive Writing”

Individual researchers differ quite a bit in their preference for such titles. Some use them regularly, while others never use them. What might be some of the pros and cons of using cute article titles?

For articles that are being submitted for publication, the title page also includes an author note that lists the authors’ full institutional affiliations, any acknowledgments the authors wish to make to agencies that funded the research or to colleagues who commented on it, and contact information for the authors. For student papers that are not being submitted for publication—including theses—author notes are generally not necessary.

The abstract is a summary of the study. It is the second page of the manuscript and is headed with the word  Abstract . The first line is not indented. The abstract presents the research question, a summary of the method, the basic results, and the most important conclusions. Because the abstract is usually limited to about 200 words, it can be a challenge to write a good one.

Introduction

The introduction begins on the third page of the manuscript. The heading at the top of this page is the full title of the manuscript, with each important word capitalized as on the title page. The introduction includes three distinct subsections, although these are typically not identified by separate headings. The opening introduces the research question and explains why it is interesting, the literature review discusses relevant previous research, and the closing restates the research question and comments on the method used to answer it.

The Opening

The opening , which is usually a paragraph or two in length, introduces the research question and explains why it is interesting. To capture the reader’s attention, researcher Daryl Bem recommends starting with general observations about the topic under study, expressed in ordinary language (not technical jargon)—observations that are about people and their behavior (not about researchers or their research; Bem, 2003 [1] ). Concrete examples are often very useful here. According to Bem, this would be a poor way to begin a research report:

Festinger’s theory of cognitive dissonance received a great deal of attention during the latter part of the 20th century (p. 191)

The following would be much better:

The individual who holds two beliefs that are inconsistent with one another may feel uncomfortable. For example, the person who knows that they enjoy smoking but believes it to be unhealthy may experience discomfort arising from the inconsistency or disharmony between these two thoughts or cognitions. This feeling of discomfort was called cognitive dissonance by social psychologist Leon Festinger (1957), who suggested that individuals will be motivated to remove this dissonance in whatever way they can (p. 191).

After capturing the reader’s attention, the opening should go on to introduce the research question and explain why it is interesting. Will the answer fill a gap in the literature? Will it provide a test of an important theory? Does it have practical implications? Giving readers a clear sense of what the research is about and why they should care about it will motivate them to continue reading the literature review—and will help them make sense of it.

Breaking the Rules

Researcher Larry Jacoby reported several studies showing that a word that people see or hear repeatedly can seem more familiar even when they do not recall the repetitions—and that this tendency is especially pronounced among older adults. He opened his article with the following humorous anecdote:

A friend whose mother is suffering symptoms of Alzheimer’s disease (AD) tells the story of taking her mother to visit a nursing home, preliminary to her mother’s moving there. During an orientation meeting at the nursing home, the rules and regulations were explained, one of which regarded the dining room. The dining room was described as similar to a fine restaurant except that tipping was not required. The absence of tipping was a central theme in the orientation lecture, mentioned frequently to emphasize the quality of care along with the advantages of having paid in advance. At the end of the meeting, the friend’s mother was asked whether she had any questions. She replied that she only had one question: “Should I tip?” (Jacoby, 1999, p. 3)

Although both humor and personal anecdotes are generally discouraged in APA-style writing, this example is a highly effective way to start because it both engages the reader and provides an excellent real-world example of the topic under study.

The Literature Review

Immediately after the opening comes the  literature review , which describes relevant previous research on the topic and can be anywhere from several paragraphs to several pages in length. However, the literature review is not simply a list of past studies. Instead, it constitutes a kind of argument for why the research question is worth addressing. By the end of the literature review, readers should be convinced that the research question makes sense and that the present study is a logical next step in the ongoing research process.

Like any effective argument, the literature review must have some kind of structure. For example, it might begin by describing a phenomenon in a general way along with several studies that demonstrate it, then describing two or more competing theories of the phenomenon, and finally presenting a hypothesis to test one or more of the theories. Or it might describe one phenomenon, then describe another phenomenon that seems inconsistent with the first one, then propose a theory that resolves the inconsistency, and finally present a hypothesis to test that theory. In applied research, it might describe a phenomenon or theory, then describe how that phenomenon or theory applies to some important real-world situation, and finally suggest a way to test whether it does, in fact, apply to that situation.

Looking at the literature review in this way emphasizes a few things. First, it is extremely important to start with an outline of the main points that you want to make, organized in the order that you want to make them. The basic structure of your argument, then, should be apparent from the outline itself. Second, it is important to emphasize the structure of your argument in your writing. One way to do this is to begin the literature review by summarizing your argument even before you begin to make it. “In this article, I will describe two apparently contradictory phenomena, present a new theory that has the potential to resolve the apparent contradiction, and finally present a novel hypothesis to test the theory.” Another way is to open each paragraph with a sentence that summarizes the main point of the paragraph and links it to the preceding points. These opening sentences provide the “transitions” that many beginning researchers have difficulty with. Instead of beginning a paragraph by launching into a description of a previous study, such as “Williams (2004) found that…,” it is better to start by indicating something about why you are describing this particular study. Here are some simple examples:

Another example of this phenomenon comes from the work of Williams (2004).

Williams (2004) offers one explanation of this phenomenon.

An alternative perspective has been provided by Williams (2004).

We used a method based on the one used by Williams (2004).

Finally, remember that your goal is to construct an argument for why your research question is interesting and worth addressing—not necessarily why your favorite answer to it is correct. In other words, your literature review must be balanced. If you want to emphasize the generality of a phenomenon, then of course you should discuss various studies that have demonstrated it. However, if there are other studies that have failed to demonstrate it, you should discuss them too. Or if you are proposing a new theory, then of course you should discuss findings that are consistent with that theory. However, if there are other findings that are inconsistent with it, again, you should discuss them too. It is acceptable to argue that the  balance  of the research supports the existence of a phenomenon or is consistent with a theory (and that is usually the best that researchers in psychology can hope for), but it is not acceptable to  ignore contradictory evidence. Besides, a large part of what makes a research question interesting is uncertainty about its answer.

The Closing

The closing of the introduction—typically the final paragraph or two—usually includes two important elements. The first is a clear statement of the main research question and hypothesis. This statement tends to be more formal and precise than in the opening and is often expressed in terms of operational definitions of the key variables. The second is a brief overview of the method and some comment on its appropriateness. Here, for example, is how Darley and Latané (1968) [2] concluded the introduction to their classic article on the bystander effect:

These considerations lead to the hypothesis that the more bystanders to an emergency, the less likely, or the more slowly, any one bystander will intervene to provide aid. To test this proposition it would be necessary to create a situation in which a realistic “emergency” could plausibly occur. Each subject should also be blocked from communicating with others to prevent his getting information about their behavior during the emergency. Finally, the experimental situation should allow for the assessment of the speed and frequency of the subjects’ reaction to the emergency. The experiment reported below attempted to fulfill these conditions. (p. 378)

Thus the introduction leads smoothly into the next major section of the article—the method section.

The Method Section

The method section is where you describe how you conducted your study. An important principle for writing a method section is that it should be clear and detailed enough that other researchers could replicate the study by following your “recipe.” This means that it must describe all the important elements of the study—basic demographic characteristics of the participants, how they were recruited, whether they were randomly assigned to conditions, how the variables were manipulated or measured, how counterbalancing was accomplished, and so on. At the same time, it should avoid irrelevant details such as the fact that the study was conducted in Classroom 37B of the Industrial Technology Building or that the questionnaire was double-sided and completed using pencils.

The method section begins immediately after the introduction ends with the heading “Method” (not “Methods”) centered on the page. Immediately after this is the subheading “Participants,” left justified and in italics. The participants subsection indicates how many participants there were, the number of women and men, some indication of their age, other demographics that may be relevant to the study, and how they were recruited, including any incentives given for participation.

Figure 11.1 Three Ways of Organizing an APA-Style Method

After the participants section, the structure can vary a bit. Figure 11.1 shows three common approaches. In the first, the participants section is followed by a design and procedure subsection, which describes the rest of the method. This works well for methods that are relatively simple and can be described adequately in a few paragraphs. In the second approach, the participants section is followed by separate design and procedure subsections. This works well when both the design and the procedure are relatively complicated and each requires multiple paragraphs.

What is the difference between design and procedure? The design of a study is its overall structure. What were the independent and dependent variables? Was the independent variable manipulated, and if so, was it manipulated between or within subjects? How were the variables operationally defined? The procedure is how the study was carried out. It often works well to describe the procedure in terms of what the participants did rather than what the researchers did. For example, the participants gave their informed consent, read a set of instructions, completed a block of four practice trials, completed a block of 20 test trials, completed two questionnaires, and were debriefed and excused.

In the third basic way to organize a method section, the participants subsection is followed by a materials subsection before the design and procedure subsections. This works well when there are complicated materials to describe. This might mean multiple questionnaires, written vignettes that participants read and respond to, perceptual stimuli, and so on. The heading of this subsection can be modified to reflect its content. Instead of “Materials,” it can be “Questionnaires,” “Stimuli,” and so on. The materials subsection is also a good place to refer to the reliability and/or validity of the measures. This is where you would present test-retest correlations, Cronbach’s α, or other statistics to show that the measures are consistent across time and across items and that they accurately measure what they are intended to measure.
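
The most commonly reported of these reliability statistics, Cronbach’s α, has a simple closed form: α = k/(k − 1) × (1 − Σ item variances / variance of total scores), where k is the number of items. As a rough illustration, here is a minimal Python sketch; the function name and the ratings are hypothetical, invented for this example rather than drawn from any study described here:

```python
# Cronbach's alpha for a k-item scale:
#   alpha = k/(k-1) * (1 - (sum of item variances) / (variance of total scores))
def cronbach_alpha(scores):
    """scores: one row per participant, one column per item."""
    k = len(scores[0])

    def var(xs):  # sample variance (n - 1 in the denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Hypothetical ratings: five participants, three-item scale.
data = [[4, 5, 4], [2, 3, 2], [5, 5, 4], [3, 3, 3], [4, 4, 5]]
alpha = cronbach_alpha(data)
```

Values of α closer to 1 indicate that the items are consistent with one another; in practice, researchers typically compute this with statistical software rather than by hand.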

The Results Section

The results section is where you present the main results of the study, including the results of the statistical analyses. Although it does not include the raw data—individual participants’ responses or scores—researchers should save their raw data and make them available to other researchers who request them. Many journals encourage the open sharing of raw data online, and some now require open data and materials before publication.

Although there are no standard subsections, it is still important for the results section to be logically organized. Typically it begins with certain preliminary issues. One is whether any participants or responses were excluded from the analyses and why. The rationale for excluding data should be described clearly so that other researchers can decide whether it is appropriate. A second preliminary issue is how multiple responses were combined to produce the primary variables in the analyses. For example, if participants rated the attractiveness of 20 stimulus people, you might have to explain that you began by computing the mean attractiveness rating for each participant. Or if they recalled as many items as they could from a study list of 20 words, did you count the number correctly recalled, compute the percentage correctly recalled, or perhaps compute the number correct minus the number incorrect? A final preliminary issue is whether the manipulation was successful. This is where you would report the results of any manipulation checks.
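
The scoring options just mentioned amount to simple arithmetic on each participant’s raw responses. A minimal sketch, assuming a hypothetical 20-word recall task scored 1 for a correctly recalled word and 0 otherwise:

```python
# Hypothetical responses for one participant on a 20-word recall test
# (1 = word correctly recalled, 0 = not recalled).
responses = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1,
             1, 0, 1, 1, 1, 0, 1, 1, 1, 0]

num_correct = sum(responses)                      # number correctly recalled
pct_correct = 100 * num_correct / len(responses)  # percentage correctly recalled
num_incorrect = len(responses) - num_correct
corrected_score = num_correct - num_incorrect     # correct minus incorrect
```

Whichever scoring rule you choose, the results section should state it explicitly so that readers know exactly what the primary variable represents.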

The results section should then tackle the primary research questions, one at a time. Again, there should be a clear organization. One approach would be to answer the most general questions and then proceed to answer more specific ones. Another would be to answer the main question first and then to answer secondary ones. Regardless, Bem (2003) [3] suggests the following basic structure for discussing each new result:

  • Remind the reader of the research question.
  • Give the answer to the research question in words.
  • Present the relevant statistics.
  • Qualify the answer if necessary.
  • Summarize the result.

Notice that only Step 3 necessarily involves numbers. The rest of the steps involve presenting the research question and the answer to it in words. In fact, the basic results should be clear even to a reader who skips over the numbers.

The Discussion

The discussion is the last major section of the research report. Discussions usually consist of some combination of the following elements:

  • Summary of the research
  • Theoretical implications
  • Practical implications
  • Limitations
  • Suggestions for future research

The discussion typically begins with a summary of the study that provides a clear answer to the research question. In a short report with a single study, this might require no more than a sentence. In a longer report with multiple studies, it might require a paragraph or even two. The summary is often followed by a discussion of the theoretical implications of the research. Do the results provide support for any existing theories? If not, how  can  they be explained? Although you do not have to provide a definitive explanation or detailed theory for your results, you at least need to outline one or more possible explanations. In applied research—and often in basic research—there is also some discussion of the practical implications of the research. How can the results be used, and by whom, to accomplish some real-world goal?

The theoretical and practical implications are often followed by a discussion of the study’s limitations. Perhaps there are problems with its internal or external validity. Perhaps the manipulation was not very effective or the measures not very reliable. Perhaps there is some evidence that participants did not fully understand their task or that they were suspicious of the intent of the researchers. Now is the time to discuss these issues and how they might have affected the results. But do not overdo it. All studies have limitations, and most readers will understand that a different sample or different measures might have produced different results. Unless there is good reason to think they  would have, however, there is no reason to mention these routine issues. Instead, pick two or three limitations that seem like they could have influenced the results, explain how they could have influenced the results, and suggest ways to deal with them.

Most discussions end with some suggestions for future research. If the study did not satisfactorily answer the original research question, what will it take to do so? What  new  research questions has the study raised? This part of the discussion, however, is not just a list of new questions. It is a discussion of two or three of the most important unresolved issues. This means identifying and clarifying each question, suggesting some alternative answers, and even suggesting ways they could be studied.

Finally, some researchers are quite good at ending their articles with a sweeping or thought-provoking conclusion. Darley and Latané (1968) [4] , for example, ended their article on the bystander effect by discussing the idea that whether people help others may depend more on the situation than on their personalities. Their final sentence is, “If people understand the situational forces that can make them hesitate to intervene, they may better overcome them” (p. 383). However, this kind of ending can be difficult to pull off. It can sound overreaching or just banal and end up detracting from the overall impact of the article. It is often better simply to end by returning to the problem or issue introduced in your opening paragraph and clearly stating how your research has addressed that issue or problem.

The References Section

The references section begins on a new page with the heading “References” centered at the top of the page. All references cited in the text are then listed in the format presented earlier. They are listed alphabetically by the last name of the first author. If two sources have the same first author, they are listed alphabetically by the last name of the second author. If all the authors are the same, then they are listed chronologically by the year of publication. Everything in the reference list is double-spaced both within and between references.

Appendices, Tables, and Figures

Appendices, tables, and figures come after the references. An appendix is appropriate for supplemental material that would interrupt the flow of the research report if it were presented within any of the major sections. An appendix could be used to present lists of stimulus words, questionnaire items, detailed descriptions of special equipment or unusual statistical analyses, or references to the studies that are included in a meta-analysis. Each appendix begins on a new page. If there is only one, the heading is “Appendix,” centered at the top of the page. If there is more than one, the headings are “Appendix A,” “Appendix B,” and so on, and they appear in the order they were first mentioned in the text of the report.

After any appendices come tables and then figures. Tables and figures are both used to present results. Figures can also be used to display graphs, illustrate theories (e.g., in the form of a flowchart), display stimuli, outline procedures, and present many other kinds of information. Each table and figure appears on its own page. Tables are numbered in the order that they are first mentioned in the text (“Table 1,” “Table 2,” and so on). Figures are numbered the same way (“Figure 1,” “Figure 2,” and so on). A brief explanatory title, with the important words capitalized, appears above each table. Each figure is given a brief explanatory caption, where (aside from proper nouns or names) only the first word of each sentence is capitalized. More details on preparing APA-style tables and figures are presented later in the book.

Sample APA-Style Research Report

Figures 11.2, 11.3, 11.4, and 11.5 show some sample pages from an APA-style empirical research report originally written by undergraduate student Tomoe Suyama at California State University, Fresno. The main purpose of these figures is to illustrate the basic organization and formatting of an APA-style empirical research report, although many high-level and low-level style conventions can be seen here too.

Figure 11.2 Title Page and Abstract. This student paper does not include the author note on the title page. The abstract appears on its own page.

  • Bem, D. J. (2003). Writing the empirical journal article. In J. M. Darley, M. P. Zanna, & H. R. Roediger III (Eds.), The complete academic: A practical guide for the beginning social scientist (2nd ed.). Washington, DC: American Psychological Association. ↵
  • Darley, J. M., & Latané, B. (1968). Bystander intervention in emergencies: Diffusion of responsibility. Journal of Personality and Social Psychology, 4, 377–383. ↵
  • Bem, D. J. (2003). Writing the empirical journal article. In J. M. Darley, M. P. Zanna, & H. R. Roediger III (Eds.), The complete academic: A practical guide for the beginning social scientist (2nd ed.). Washington, DC: American Psychological Association. ↵
  • Darley, J. M., & Latané, B. (1968). Bystander intervention in emergencies: Diffusion of responsibility. Journal of Personality and Social Psychology, 4, 377–383. ↵
  • Define non-experimental research, distinguish it clearly from experimental research, and give several examples.
  • Explain when a researcher might choose to conduct non-experimental research as opposed to experimental research.

What Is Non-Experimental Research?

Non-experimental research  is research that lacks the manipulation of an independent variable. Rather than manipulating an independent variable, researchers conducting non-experimental research simply measure variables as they naturally occur (in the lab or real world).

Most researchers in psychology consider the distinction between experimental and non-experimental research to be an extremely important one. This is because although experimental research can provide strong evidence that changes in an independent variable cause differences in a dependent variable, non-experimental research generally cannot. As we will see, however, this inability to make causal conclusions does not mean that non-experimental research is less important than experimental research. It is simply used in cases where experimental research cannot be carried out.

When to Use Non-Experimental Research

As we saw in the last chapter, experimental research is appropriate when the researcher has a specific research question or hypothesis about a causal relationship between two variables—and it is possible, feasible, and ethical to manipulate the independent variable. It stands to reason, therefore, that non-experimental research is appropriate—even necessary—when these conditions are not met. There are many situations in which non-experimental research is preferred, including when:

  • the research question or hypothesis relates to a single variable rather than a statistical relationship between two variables (e.g., how accurate are people’s first impressions?).
  • the research question pertains to a non-causal statistical relationship between variables (e.g., is there a correlation between verbal intelligence and mathematical intelligence?).
  • the research question is about a causal relationship, but the independent variable cannot be manipulated or participants cannot be randomly assigned to conditions or orders of conditions for practical or ethical reasons (e.g., does damage to a person’s hippocampus impair the formation of long-term memory traces?).
  • the research question is broad and exploratory, or is about what it is like to have a particular experience (e.g., what is it like to be a working mother diagnosed with depression?).

Again, the choice between the experimental and non-experimental approaches is generally dictated by the nature of the research question. Recall that the three goals of science are to describe, to predict, and to explain. If the goal is to explain and the research question pertains to causal relationships, then the experimental approach is typically preferred. If the goal is to describe or to predict, a non-experimental approach is appropriate. But the two approaches can also be used to address the same research question in complementary ways. For example, in his original (non-experimental) obedience study, Milgram was primarily interested in one variable—the extent to which participants obeyed the researcher when he told them to shock the confederate—and he observed all participants performing the same task under the same conditions. However, Milgram subsequently conducted experiments to explore the factors that affect obedience. He manipulated several independent variables, such as the distance between the experimenter and the participant, the distance between the participant and the confederate, and the location of the study (Milgram, 1974) [1] .

Types of Non-Experimental Research

Non-experimental research falls into two broad categories: correlational research and observational research. 

The most common type of non-experimental research conducted in psychology is correlational research. Correlational research is considered non-experimental because it focuses on the statistical relationship between two variables but does not include the manipulation of an independent variable. More specifically, in correlational research , the researcher measures two variables with little or no attempt to control extraneous variables and then assesses the relationship between them. As an example, a researcher interested in the relationship between self-esteem and school achievement could collect data on students' self-esteem and their GPAs to see if the two variables are statistically related.
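
In practice, “statistically related” here usually means computing a correlation coefficient such as Pearson’s r between the two measures. Here is a minimal sketch; the self-esteem scores and GPAs below are invented for illustration, not real data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: self-esteem scores and GPAs for six students.
self_esteem = [32, 25, 40, 28, 35, 30]
gpa = [3.2, 2.8, 3.8, 2.9, 3.5, 3.1]
r = pearson_r(self_esteem, gpa)
```

A value of r near +1 or −1 indicates a strong statistical relationship, and a value near 0 indicates a weak one—but, as emphasized above, even a strong correlation does not by itself show that one variable causes the other.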

Observational research  is non-experimental because it focuses on making observations of behavior in a natural or laboratory setting without manipulating anything. Milgram’s original obedience study was non-experimental in this way. He was primarily interested in the extent to which participants obeyed the researcher when he told them to shock the confederate and he observed all participants performing the same task under the same conditions. The study by Loftus and Pickrell described at the beginning of this chapter is also a good example of observational research. The variable was whether participants “remembered” having experienced mildly traumatic childhood events (e.g., getting lost in a shopping mall) that they had not actually experienced but that the researchers asked them about repeatedly. In this particular study, nearly a third of the participants “remembered” at least one event. (As with Milgram’s original study, this study inspired several later experiments on the factors that affect false memories).

Cross-Sectional, Longitudinal, and Cross-Sequential Studies

When psychologists wish to study change over time (for example, when developmental psychologists wish to study aging) they usually take one of three non-experimental approaches: cross-sectional, longitudinal, or cross-sequential. Cross-sectional studies involve comparing two or more pre-existing groups of people (e.g., children at different stages of development). What makes this approach non-experimental is that there is no manipulation of an independent variable and no random assignment of participants to groups. Using this design, developmental psychologists compare groups of people of different ages (e.g., young adults aged 18 to 25 versus older adults aged 60 to 75) on various dependent variables (e.g., memory, depression, life satisfaction). Of course, the primary limitation of using this design to study the effects of aging is that differences between the groups other than age may account for differences in the dependent variable. For instance, differences between the groups may reflect the generation that people come from (a cohort effect) rather than a direct effect of age.

For this reason, longitudinal studies, in which one group of people is followed over time as they age, offer a superior means of studying the effects of aging. However, longitudinal studies are by definition more time consuming and so require a much greater investment on the part of the researcher and the participants.

A third approach, known as cross-sequential studies, combines elements of both cross-sectional and longitudinal studies. Rather than measuring differences between people in different age groups or following the same people over a long period of time, researchers adopting this approach choose a smaller period of time during which they follow people in different age groups. For example, they might measure changes over a ten-year period among participants who at the start of the study fall into the following age groups: 20 years old, 30 years old, 40 years old, 50 years old, and 60 years old. This design is advantageous because the researcher reaps the immediate benefits of being able to compare the age groups after the first assessment. Further, by following the different age groups over time, researchers can subsequently determine whether the original differences they found across the age groups are due to true age effects or to cohort effects.
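
The logic of a cross-sequential schedule can be sketched in a few lines. The cohort start ages and ten-year span below come from the example above; assessing every five years is an added assumption for illustration:

```python
# Sketch of a cross-sequential schedule: five age cohorts followed for
# ten years. The five-year assessment interval is an assumption; the
# example in the text specifies only the start ages and the ten-year span.
cohorts = [20, 30, 40, 50, 60]    # age at the start of the study
assessment_offsets = [0, 5, 10]   # years after the start

schedule = {start: [start + y for y in assessment_offsets] for start in cohorts}
# The 20-year-old cohort, for instance, is assessed at ages 20, 25, and 30.
```

Comparing cohorts at the first assessment gives the cross-sectional comparison, while following each cohort across its own rows gives the longitudinal one.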

The types of research we have discussed so far are all quantitative, referring to the fact that the data consist of numbers that are analyzed using statistical techniques. But as you will learn in this chapter, many observational research studies are more qualitative in nature. In  qualitative research , the data are usually nonnumerical and therefore cannot be analyzed using statistical techniques. Rosenhan’s observational study of the experience of people in psychiatric wards was primarily qualitative. The data were the notes taken by the “pseudopatients”—the people pretending to have heard voices—along with their hospital records. Rosenhan’s analysis consists mainly of a written description of the experiences of the pseudopatients, supported by several concrete examples. To illustrate the hospital staff’s tendency to “depersonalize” their patients, he noted, “Upon being admitted, I and other pseudopatients took the initial physical examinations in a semi-public room, where staff members went about their own business as if we were not there” (Rosenhan, 1973, p. 256) [2] . Qualitative data call for their own set of analysis tools, chosen to fit the research question. For example, thematic analysis focuses on themes that emerge in the data, whereas conversation analysis focuses on how words are used in an interview or focus group.

Internal Validity Revisited

Recall that internal validity is the extent to which the design of a study supports the conclusion that changes in the independent variable caused any observed differences in the dependent variable. Figure 6.1 shows how experimental, quasi-experimental, and non-experimental (correlational) research vary in terms of internal validity. Experimental research tends to be highest in internal validity because the use of manipulation (of the independent variable) and control (of extraneous variables) helps to rule out alternative explanations for the observed relationships. If the average score on the dependent variable in an experiment differs across conditions, it is quite likely that the independent variable is responsible for that difference. Non-experimental (correlational) research is lowest in internal validity because these designs fail to use manipulation or control. Quasi-experimental research (which will be described in more detail in a subsequent chapter) falls in the middle because it contains some, but not all, of the features of a true experiment. For instance, it may fail to use random assignment to assign participants to groups or fail to use counterbalancing to control for potential order effects. Imagine, for example, that a researcher finds two similar schools, starts an anti-bullying program in one, and then finds fewer bullying incidents in that “treatment school” than in the “control school.” While a comparison is being made with a control condition, the inability to randomly assign children to schools could still mean that students in the treatment school differed from students in the control school in some other way that could explain the difference in bullying (e.g., there may be a selection effect).

Figure 6.1 Internal Validity of Correlational, Quasi-Experimental, and Experimental Studies. Experiments are generally high in internal validity, quasi-experiments lower, and correlational studies lower still.

Notice also in  Figure 6.1 that there is some overlap in the internal validity of experiments, quasi-experiments, and correlational (non-experimental) studies. For example, a poorly designed experiment that includes many confounding variables can be lower in internal validity than a well-designed quasi-experiment with no obvious confounding variables. Internal validity is also only one of several validities that one might consider, as noted in Chapter 5.

  • Describe several strategies for recruiting participants for an experiment.
  • Explain why it is important to standardize the procedure of an experiment and several ways to do this.
  • Explain what pilot testing is and why it is important.

The information presented so far in this chapter is enough to design a basic experiment. When it comes time to conduct that experiment, however, several additional practical issues arise. In this section, we consider some of these issues and how to deal with them. Much of this information applies to non-experimental studies as well as experimental ones.

Recruiting Participants

Of course, at the start of any research project, you should be thinking about how you will obtain your participants. If you do not have access to people with schizophrenia or incarcerated juvenile offenders, for example, there is no point designing a study that focuses on these populations. But even if you plan to use a convenience sample, you will have to recruit participants for your study.

There are several approaches to recruiting participants. One is to use participants from a formal  subject pool —an established group of people who have agreed to be contacted about participating in research studies. For example, at many colleges and universities, there is a subject pool consisting of students enrolled in introductory psychology courses who must participate in a certain number of studies to meet a course requirement. Researchers post descriptions of their studies and students sign up to participate, usually via an online system. Participants who are not in subject pools can also be recruited by posting or publishing advertisements or making personal appeals to groups that represent the population of interest. For example, a researcher interested in studying older adults could arrange to speak at a meeting of the residents at a retirement community to explain the study and ask for volunteers.


The Volunteer Subject

Even if the participants in a study receive compensation in the form of course credit, a small amount of money, or a chance at being treated for a psychological problem, they are still essentially volunteers. This is worth considering because people who volunteer to participate in psychological research have been shown to differ in predictable ways from those who do not volunteer. Specifically, there is good evidence that on average, volunteers have the following characteristics compared with non-volunteers (Rosenthal & Rosnow, 1976) [3] :

  • They are more interested in the topic of the research.
  • They are more educated.
  • They have a greater need for approval.
  • They have higher IQs.
  • They are more sociable.
  • They are higher in social class.

This difference can be an issue of external validity if there is a reason to believe that participants with these characteristics are likely to behave differently than the general population. For example, in testing different methods of persuading people, a rational argument might work better on volunteers than it does on the general population because of their generally higher educational level and IQ.

In many field experiments, the task is not recruiting participants but selecting them. For example, researchers Nicolas Guéguen and Marie-Agnès de Gail conducted a field experiment on the effect of being smiled at on helping, in which the participants were shoppers at a supermarket. A confederate walking down a stairway gazed directly at a shopper walking up the stairway and either smiled or did not smile. Shortly afterward, the shopper encountered another confederate, who dropped some computer diskettes on the ground. The dependent variable was whether or not the shopper stopped to help pick up the diskettes (Guéguen & de Gail, 2003) [4] . There are two aspects of this study that are worth addressing here. First, notice that these participants were not “recruited,” which means that the IRB would have taken care to ensure that dispensing with informed consent in this case was acceptable (e.g., the situation would not have been expected to cause any harm and the study was conducted in the context of people’s ordinary activities). Second, even though informed consent was not necessary, the researchers still had to select participants from among all the shoppers taking the stairs that day. It is extremely important that this kind of selection be done according to a well-defined set of rules that are established before the data collection begins and can be explained clearly afterward. In this case, with each trip down the stairs, the confederate was instructed to gaze at the first person he encountered who appeared to be between the ages of 20 and 50. Only if the person gazed back did they become a participant in the study. The point of having a well-defined selection rule is to avoid bias in the selection of participants. For example, if the confederate was free to choose which shoppers he would gaze at, he might choose friendly-looking shoppers when he was set to smile and unfriendly-looking ones when he was not set to smile. As we will see shortly, such biases can be entirely unintentional.

Standardizing the Procedure

It is surprisingly easy to introduce extraneous variables during the procedure. For example, the same experimenter might give clear instructions to one participant but vague instructions to another. Or one experimenter might greet participants warmly while another barely makes eye contact with them. To the extent that such variables affect participants’ behavior, they add noise to the data and make the effect of the independent variable more difficult to detect. If they vary systematically across conditions, they become confounding variables and provide alternative explanations for the results. For example, if participants in a treatment group are tested by a warm and friendly experimenter and participants in a control group are tested by a cold and unfriendly one, then what appears to be an effect of the treatment might actually be an effect of experimenter demeanor. When there are multiple experimenters, the possibility of introducing extraneous variables is even greater, but is often necessary for practical reasons.

Experimenter’s Sex as an Extraneous Variable

It is well known that whether research participants are male or female can affect the results of a study. But what about whether the  experimenter  is male or female? There is plenty of evidence that this matters too. Male and female experimenters have slightly different ways of interacting with their participants, and of course, participants also respond differently to male and female experimenters (Rosenthal, 1976) [5] .

For example, in a recent study on pain perception, participants immersed their hands in icy water for as long as they could (Ibolya, Brake, & Voss, 2004) [6] . Male participants tolerated the pain longer when the experimenter was a woman, and female participants tolerated it longer when the experimenter was a man.

Researcher Robert Rosenthal has spent much of his career showing that this kind of unintended variation in the procedure does, in fact, affect participants’ behavior. Furthermore, one important source of such variation is the experimenter’s expectations about how participants “should” behave in the experiment. This outcome is referred to as an  experimenter expectancy effect  (Rosenthal, 1976) [7] . For example, if an experimenter expects participants in a treatment group to perform better on a task than participants in a control group, then they might unintentionally give the treatment group participants clearer instructions or more encouragement or allow them more time to complete the task. In a striking example, Rosenthal and Kermit Fode had several students in a laboratory course in psychology train rats to run through a maze. Although the rats were genetically similar, some of the students were told that they were working with “maze-bright” rats that had been bred to be good learners, and other students were told that they were working with “maze-dull” rats that had been bred to be poor learners. Sure enough, over five days of training, the “maze-bright” rats made more correct responses, made the correct response more quickly, and improved more steadily than the “maze-dull” rats (Rosenthal & Fode, 1963) [8] . Clearly, it had to have been the students’ expectations about how the rats would perform that made the difference. But how? Some clues come from data gathered at the end of the study, which showed that students who expected their rats to learn quickly felt more positively about their animals and reported behaving toward them in a more friendly manner (e.g., handling them more).

The way to minimize unintended variation in the procedure is to standardize it as much as possible so that it is carried out in the same way for all participants regardless of the condition they are in. Here are several ways to do this:

  • Create a written protocol that specifies everything that the experimenters are to do and say from the time they greet participants to the time they dismiss them.
  • Create standard instructions that participants read themselves or that are read to them word for word by the experimenter.
  • Automate the rest of the procedure as much as possible by using software packages for this purpose or even simple computer slide shows.
  • Anticipate participants’ questions and either raise and answer them in the instructions or develop standard answers for them.
  • Train multiple experimenters on the protocol together and have them practice on each other.
  • Be sure that each experimenter tests participants in all conditions.
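One piece of standardization that is easy to automate is the condition sequence itself. As a hypothetical sketch (the function and condition names are our own, not from the text), the following Python snippet generates a block-randomized sequence in which every condition appears equally often, which helps keep the procedure uniform across participants:

```python
import random

def block_randomized_sequence(conditions, n_blocks, seed=None):
    """Return a condition sequence in which each condition appears
    exactly once per block, in a random order within each block."""
    rng = random.Random(seed)  # seeded generator so the sequence is reproducible
    sequence = []
    for _ in range(n_blocks):
        block = list(conditions)
        rng.shuffle(block)
        sequence.extend(block)
    return sequence

# Example: 3 conditions run in 4 blocks -> a sequence for 12 participants
seq = block_randomized_sequence(["control", "drug_low", "drug_high"],
                                n_blocks=4, seed=1)
```

Because the sequence is fixed (and reproducible from the seed) before testing begins, each new participant simply receives the next condition in the list.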

Another good practice is to arrange for the experimenters to be “blind” to the research question or to the condition in which each participant is tested. The idea is to minimize experimenter expectancy effects by minimizing the experimenters’ expectations. For example, in a drug study in which each participant receives the drug or a placebo, it is often the case that neither the participants nor the experimenter who interacts with the participants knows which condition they have been assigned to complete. Because both the participants and the experimenters are blind to the condition, this technique is referred to as a  double-blind study . (A single-blind study is one in which only the participant is blind to the condition.) Of course, there are many times this blinding is not possible. For example, if you are both the investigator and the only experimenter, it is not possible for you to remain blind to the research question. Also, in many studies, the experimenter  must  know the condition because they must carry out the procedure in a different way in the different conditions.


Record Keeping

It is essential to keep good records when you conduct an experiment. As discussed earlier, it is typical for experimenters to generate a written sequence of conditions before the study begins and then to test each new participant in the next condition in the sequence. As you test them, it is a good idea to add to this list basic demographic information; the date, time, and place of testing; and the name of the experimenter who did the testing. It is also a good idea to have a place for the experimenter to write down comments about unusual occurrences (e.g., a confused or uncooperative participant) or questions that come up. This kind of information can be useful later if you decide to analyze sex differences or effects of different experimenters, or if a question arises about a particular participant or testing session.

Since participants' identities should be kept as confidential (or anonymous) as possible, their names and other identifying information should not be included with their data. In order to identify individual participants, it can, therefore, be useful to assign an identification number to each participant as you test them. Simply numbering them consecutively beginning with 1 is usually sufficient. This number can then also be written on any response sheets or questionnaires that participants generate, making it easier to keep them together.
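As a minimal illustration of these record-keeping suggestions (the class and field names here are hypothetical, not prescribed by the text), a testing log can assign consecutive identification numbers automatically and keep session metadata separate from participants' names:

```python
import datetime

class TestingLog:
    """Minimal testing log: assigns consecutive participant IDs starting
    at 1 and stores session metadata, but no identifying information."""

    def __init__(self):
        self.records = []

    def add_session(self, condition, experimenter, comments=""):
        record = {
            "participant_id": len(self.records) + 1,  # consecutive IDs from 1
            "condition": condition,
            "experimenter": experimenter,
            "tested_at": datetime.datetime.now().isoformat(timespec="minutes"),
            "comments": comments,  # e.g., "participant seemed confused"
        }
        self.records.append(record)
        return record["participant_id"]

log = TestingLog()
log.add_session("treatment", "R.S.")
log.add_session("control", "R.S.", comments="asked about the hypothesis")
```

The returned identification number can then be written on any response sheets or questionnaires the participant generates, as the text suggests.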

Manipulation Check

In many experiments, the independent variable is a construct that can only be manipulated indirectly. For example, a researcher might try to manipulate participants’ stress levels indirectly by telling some of them that they have five minutes to prepare a short speech that they will then have to give to an audience of other participants. In such situations, researchers often include a manipulation check  in their procedure. A manipulation check is a separate measure of the construct the researcher is trying to manipulate. The purpose of a manipulation check is to confirm that the independent variable was, in fact, successfully manipulated. For example, researchers trying to manipulate participants’ stress levels might give them a paper-and-pencil stress questionnaire or take their blood pressure—perhaps right after the manipulation or at the end of the procedure—to verify that they successfully manipulated this variable.

Manipulation checks are particularly important when the results of an experiment turn out to be null. In cases where the results show no significant effect of the manipulation of the independent variable on the dependent variable, a manipulation check can help the experimenter determine whether the null result is due to a real absence of an effect of the independent variable on the dependent variable or to a problem with the manipulation of the independent variable. Imagine, for example, that you exposed participants to happy or sad movie music—intending to put them in happy or sad moods—but you found that this had no effect on the number of happy or sad childhood events they recalled. This could be because being in a happy or sad mood has no effect on memories for childhood events. But it could also be that the music was ineffective at putting participants in happy or sad moods. A manipulation check—in this case, a measure of participants’ moods—would help resolve this uncertainty. If it showed that you had successfully manipulated participants’ moods, then it would appear that there is indeed no effect of mood on memory for childhood events. But if it showed that you did not successfully manipulate participants’ moods, then it would appear that you need a more effective manipulation to answer your research question.

Manipulation checks are usually done at the end of the procedure to be sure that the effect of the manipulation lasted throughout the entire procedure and to avoid calling unnecessary attention to the manipulation (to avoid a demand characteristic). However, researchers are wise to include a manipulation check in a pilot test of their experiment so that they avoid spending a lot of time and resources on an experiment that is doomed to fail and instead spend that time and energy finding a better manipulation of the independent variable.
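As a toy sketch of how such manipulation-check data might be examined (the ratings below are invented purely for illustration), one can compare mean mood ratings between the happy-music and sad-music conditions; a clear gap is consistent with a successful manipulation:

```python
from statistics import mean, stdev

# Hypothetical mood ratings on a 1 (very sad) to 7 (very happy) scale
happy_music = [6, 5, 6, 7, 5, 6]
sad_music = [3, 2, 4, 2, 3, 3]

m_happy, m_sad = mean(happy_music), mean(sad_music)
print(f"Happy music: M = {m_happy:.2f}, SD = {stdev(happy_music):.2f}")
print(f"Sad music:   M = {m_sad:.2f}, SD = {stdev(sad_music):.2f}")

# A large difference in means suggests the manipulation worked; in practice
# this would be evaluated with an inferential test (e.g., a t test), not a
# fixed cutoff. The threshold below is illustrative only.
manipulation_worked = m_happy - m_sad > 1.0
```

If `manipulation_worked` were false, the next step under the logic above would be to find a stronger mood manipulation before rerunning the main experiment.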

Pilot Testing

It is always a good idea to conduct a  pilot test  of your experiment. A pilot test is a small-scale study conducted to make sure that a new procedure works as planned. In a pilot test, you can recruit participants formally (e.g., from an established participant pool) or you can recruit them informally from among family, friends, classmates, and so on. The number of participants can be small, but it should be enough to give you confidence that your procedure works as planned. There are several important questions that you can answer by conducting a pilot test:

  • Do participants understand the instructions?
  • What kind of misunderstandings do participants have, what kind of mistakes do they make, and what kind of questions do they ask?
  • Do participants become bored or frustrated?
  • Is an indirect manipulation effective? (You will need to include a manipulation check.)
  • Can participants guess the research question or hypothesis (are there demand characteristics)?
  • How long does the procedure take?
  • Are computer programs or other automated procedures working properly?
  • Are data being recorded correctly?

Of course, to answer some of these questions you will need to observe participants carefully during the procedure and talk with them about it afterward. Participants are often hesitant to criticize a study in front of the researcher, so be sure they understand that their participation is part of a pilot test and you are genuinely interested in feedback that will help you improve the procedure. If the procedure works as planned, then you can proceed with the actual study. If there are problems to be solved, you can solve them, pilot test the new procedure, and continue with this process until you are ready to proceed.

Research Methods in Psychology Copyright © 2020 by Rajiv S. Jhangiani, I-Chant A. Chiang, Carrie Cuttler, Dana C. Leighton & Molly A. Metz is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


How to Write a Psychology Research Paper

Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."


 James Lacy, MLS, is a fact-checker and researcher.


Are you working on a psychology research paper this semester? Whether or not this is your first research paper, the entire process can seem a bit overwhelming at first. But, knowing where to start the research process can make things easier and less stressful.

While a research paper can initially feel very intimidating, it is not quite as scary if you break it down into more manageable steps. The following tips will help you divide the process into stages so your paper is easier to research and write.

Decide What Kind of Paper You Are Going to Write

Before you begin, you should find out the type of paper your instructor expects you to write. There are a few common types of psychology papers that you might encounter.

Original Research or Lab Report

A report or empirical paper details research you conducted on your own. This is the type of paper you would write if your instructor had you perform your own psychology experiment. This type of paper follows a format similar to an APA format lab report. It includes a title page, abstract , introduction, method section, results section, discussion section, and references.

Literature Review

The second type of paper is a literature review that summarizes research conducted by other people on a particular topic. If you are writing a psychology research paper in this form, your instructor might specify the length it needs to be or the number of studies you need to cite. Students are often required to cite between 5 and 20 studies in their literature reviews, which are usually between 8 and 20 pages in length.

The format and sections of a literature review usually include an introduction, body, and discussion/implications/conclusions.

Literature reviews often begin by introducing the research question before narrowing the focus to the specific studies cited in the paper. Each cited study should be described in considerable detail. You should evaluate and compare the studies you cite and then offer your discussion of the implications of the findings.

Select an Idea for Your Research Paper


Once you have figured out the type of research paper you are going to write, it is time to choose a good topic . In many cases, your instructor may assign you a subject, or at least specify an overall theme on which to focus.

As you are selecting your topic, try to avoid general or overly broad subjects. For example, instead of writing a research paper on the general subject of attachment , you might instead focus your research on how insecure attachment styles in early childhood impact romantic attachments later in life.

Narrowing your topic will make writing your paper easier because it allows you to focus your research, develop your thesis, and fully explore pertinent findings.

Develop an Effective Research Strategy

As you find references for your psychology paper, take careful notes on the information you use and start developing a bibliography. If you stay organized and cite your sources throughout the writing process, you will not be left searching for an important bit of information you cannot seem to track back to the source.

So, as you do your research, make careful notes about each reference including the article title, authors, journal source, and what the article was about. 

Write an Outline

You might be tempted to immediately dive into writing, but developing a strong framework can save a lot of time, hassle, and frustration. It can also help you spot potential problems with flow and structure.

If you outline the paper right off the bat, you will have a better idea of how one idea flows into the next and how your research supports your overall hypothesis .

You should start the outline with the three most fundamental sections: the introduction, the body, and the conclusion. Then, start creating subsections based on your literature review. The more detailed your outline, the easier it will be to write your paper.

Draft, Revise, and Edit

Once you are confident in your outline, it is time to begin writing. Remember to follow APA format as you write your paper and include in-text citations for any materials you reference. Make sure to cite any information in the body of your paper in your reference section at the end of your document.

Writing a psychology research paper can be intimidating at first, but breaking the process into a series of smaller steps makes it more manageable. Be sure to start early by deciding on a substantial topic, doing your research, and creating a good outline . Doing these supporting steps ahead of time makes it much easier to actually write the paper when the time comes.


By Kendra Cherry, MSEd


Writing Research Papers

Research Paper Structure

Whether you are writing a B.S. Degree Research Paper or completing a research report for a Psychology course, it is highly likely that you will need to organize your research paper in accordance with American Psychological Association (APA) guidelines.  Here we discuss the structure of research papers according to APA style.

Major Sections of a Research Paper in APA Style

A complete research paper in APA style that is reporting on experimental research will typically contain a Title page, Abstract, Introduction, Methods, Results, Discussion, and References sections. 1 Many will also contain Figures and Tables and some will have an Appendix or Appendices. These sections are detailed as follows (for a more in-depth guide, please refer to “How to Write a Research Paper in APA Style”, a comprehensive guide developed by Prof. Emma Geller). 2

Title Page

What is this paper called and who wrote it? – the first page of the paper; this includes the name of the paper, a “running head”, authors, and institutional affiliation of the authors.  The institutional affiliation is usually listed in an Author Note that is placed towards the bottom of the title page.  In some cases, the Author Note also contains an acknowledgment of any funding support and of any individuals that assisted with the research project.

Abstract

One-paragraph summary of the entire study – typically no more than 250 words in length (and in many cases well under that), the Abstract provides an overview of the study.

Introduction

What is the topic and why is it worth studying? – the first major section of text in the paper, the Introduction commonly describes the topic under investigation, summarizes or discusses relevant prior research (for related details, please see the Writing Literature Reviews section of this website), identifies unresolved issues that the current research will address, and provides an overview of the research that is to be described in greater detail in the sections to follow.

Methods

What did you do? – a section which details how the research was performed.  It typically features a description of the participants/subjects that were involved, the study design, the materials that were used, and the study procedure.  If there were multiple experiments, then each experiment may require a separate Methods section.  A rule of thumb is that the Methods section should be sufficiently detailed for another researcher to duplicate your research.

Results

What did you find? – a section which describes the data that was collected and the results of any statistical tests that were performed.  It may also be prefaced by a description of the analysis procedure that was used. If there were multiple experiments, then each experiment may require a separate Results section.

Discussion

What is the significance of your results? – the final major section of text in the paper.  The Discussion commonly features a summary of the results that were obtained in the study, describes how those results address the topic under investigation and/or the issues that the research was designed to address, and may expand upon the implications of those findings.  Limitations and directions for future research are also commonly addressed.

References

List of articles and any books cited – an alphabetized list of the sources that are cited in the paper (by last name of the first author of each source).  Each reference should follow specific APA guidelines regarding author names, dates, article titles, journal titles, journal volume numbers, page numbers, book publishers, publisher locations, websites, and so on (for more information, please see the Citing References in APA Style page of this website).

Tables and Figures

Graphs and data (optional in some cases) – depending on the type of research being performed, there may be Tables and/or Figures (however, in some cases, there may be neither).  In APA style, each Table and each Figure is placed on a separate page and all Tables and Figures are included after the References.   Tables are included first, followed by Figures.   However, for some journals and undergraduate research papers (such as the B.S. Research Paper or Honors Thesis), Tables and Figures may be embedded in the text (depending on the instructor’s or editor’s policies; for more details, see "Deviations from APA Style" below).

Appendices

Supplementary information (optional) – in some cases, additional information that is not critical to understanding the research paper, such as a list of experiment stimuli, details of a secondary analysis, or programming code, is provided.  This is often placed in an Appendix.

Variations of Research Papers in APA Style

Although the major sections described above are common to most research papers written in APA style, there are variations on that pattern.  These variations include: 

  • Literature reviews – when a paper is reviewing prior published research and not presenting new empirical research itself (such as in a review article, and particularly a qualitative review), then the authors may forgo any Methods and Results sections. Instead, there is a different structure such as an Introduction section followed by sections for each of the different aspects of the body of research being reviewed, and then perhaps a Discussion section. 
  • Multi-experiment papers – when there are multiple experiments, it is common to follow the Introduction with an Experiment 1 section, itself containing Methods, Results, and Discussion subsections. Then there is an Experiment 2 section with a similar structure, an Experiment 3 section with a similar structure, and so on until all experiments are covered.  Towards the end of the paper there is a General Discussion section followed by References.  Additionally, in multi-experiment papers, it is common for the Results and Discussion subsections for individual experiments to be combined into single “Results and Discussion” sections.

Departures from APA Style

In some cases, official APA style might not be followed (however, be sure to check with your editor, instructor, or other sources before deviating from standards of the Publication Manual of the American Psychological Association).  Such deviations may include:

  • Placement of Tables and Figures  – in some cases, to make reading through the paper easier, Tables and/or Figures are embedded in the text (for example, having a bar graph placed in the relevant Results section). The embedding of Tables and/or Figures in the text is one of the most common deviations from APA style (and is commonly allowed in B.S. Degree Research Papers and Honors Theses; however you should check with your instructor, supervisor, or editor first). 
  • Incomplete research – sometimes a B.S. Degree Research Paper in this department is written about research that is currently being planned or is in progress. In those circumstances, sometimes only an Introduction and Methods section, followed by References, is included (that is, in cases where the research itself has not formally begun).  In other cases, preliminary results are presented and noted as such in the Results section (such as in cases where the study is underway but not complete), and the Discussion section includes caveats about the in-progress nature of the research.  Again, you should check with your instructor, supervisor, or editor first.
  • Class assignments – in some classes in this department, an assignment must be written in APA style but is not exactly a traditional research paper (for instance, a student asked to write about an article that they read, and to write that report in APA style). In that case, the structure of the paper might approximate the typical sections of a research paper in APA style, but not entirely.  You should check with your instructor for further guidelines.

Workshops and Downloadable Resources

  • For in-person discussion of the process of writing research papers, please consider attending this department’s “Writing Research Papers” workshop (for dates and times, please check the undergraduate workshops calendar).

Downloadable Resources

  • How to Write APA Style Research Papers (a comprehensive guide) [ PDF ]
  • Tips for Writing APA Style Research Papers (a brief summary) [ PDF ]
  • Example APA Style Research Paper (for B.S. Degree – empirical research) [ PDF ]
  • Example APA Style Research Paper (for B.S. Degree – literature review) [ PDF ]

Further Resources

How-To Videos     

  • Writing Research Paper Videos

APA Journal Article Reporting Guidelines

  • Appelbaum, M., Cooper, H., Kline, R. B., Mayo-Wilson, E., Nezu, A. M., & Rao, S. M. (2018). Journal article reporting standards for quantitative research in psychology: The APA Publications and Communications Board task force report . American Psychologist , 73 (1), 3.
  • Levitt, H. M., Bamberg, M., Creswell, J. W., Frost, D. M., Josselson, R., & Suárez-Orozco, C. (2018). Journal article reporting standards for qualitative primary, qualitative meta-analytic, and mixed methods research in psychology: The APA Publications and Communications Board task force report . American Psychologist , 73 (1), 26.  

External Resources

  • Formatting APA Style Papers in Microsoft Word
  • How to Write an APA Style Research Paper from Hamilton University
  • WikiHow Guide to Writing APA Research Papers
  • Sample APA Formatted Paper with Comments
  • Sample APA Formatted Paper
  • Tips for Writing a Paper in APA Style

1 VandenBos, G. R. (Ed). (2010). Publication manual of the American Psychological Association (6th ed.) (pp. 41-60).  Washington, DC: American Psychological Association.

2 Geller, E. (2018). How to write an APA-style research report [Instructional materials]. Prepared by S. C. Pan for UCSD Psychology.


12.3 Expressing Your Results

Learning Objectives

  • Write out simple descriptive statistics in American Psychological Association (APA) style.
  • Interpret and create simple APA-style graphs—including bar graphs, line graphs, and scatterplots.
  • Interpret and create simple APA-style tables—including tables of group or condition means and correlation matrixes.

Once you have conducted your descriptive statistical analyses, you will need to present them to others. In this section, we focus on presenting descriptive statistical results in writing, in graphs, and in tables—following American Psychological Association (APA) guidelines for written research reports. These principles can be adapted easily to other presentation formats such as posters and slide show presentations.

Presenting Descriptive Statistics in Writing

When you have a small number of results to report, it is often most efficient to write them out. There are a few important APA style guidelines here. First, statistical results are always presented in the form of numerals rather than words and are usually rounded to two decimal places (e.g., “2.00” rather than “two” or “2”). They can be presented either in the narrative description of the results or parenthetically—much like reference citations. Here are some examples:

The mean age of the participants was 22.43 years with a standard deviation of 2.34.
Among the low self-esteem participants, those in a negative mood expressed stronger intentions to have unprotected sex ( M = 4.05, SD = 2.32) than those in a positive mood ( M = 2.15, SD = 2.27).
The treatment group had a mean of 23.40 ( SD = 9.33), while the control group had a mean of 20.87 ( SD = 8.45).
The test-retest correlation was .96.
There was a moderate negative correlation between the alphabetical position of respondents’ last names and their response time ( r = −.27).

Notice that when presented in the narrative, the terms mean and standard deviation are written out, but when presented parenthetically, the symbols M and SD are used instead. Notice also that it is especially important to use parallel construction to express similar or comparable results in similar ways. The third example is much better than the following nonparallel alternative:

The treatment group had a mean of 23.40 ( SD = 9.33), while 20.87 was the mean of the control group, which had a standard deviation of 8.45.
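The rounding and symbol conventions described above are easy to automate. As a hypothetical helper (the function name is our own invention), the following formats a sample of scores as an APA-style parenthetical, with M and SD rounded to two decimal places:

```python
from statistics import mean, stdev

def apa_mean_sd(scores):
    """Format a sample as an APA-style parenthetical, e.g. '(M = 23.40, SD = 9.33)'.

    Per APA style, statistical results are presented as numerals
    rounded to two decimal places.
    """
    return f"(M = {mean(scores):.2f}, SD = {stdev(scores):.2f})"

# Invented scores for illustration
treatment = [14, 25, 33, 30, 12, 26, 31, 16]
print("The treatment group scored higher " + apa_mean_sd(treatment))
```

Generating such strings programmatically also enforces the parallel construction the text recommends, since every group is reported in exactly the same form.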

Presenting Descriptive Statistics in Graphs

When you have a large number of results to report, you can often do it more clearly and efficiently with a graph. When you prepare graphs for an APA-style research report, there are some general guidelines that you should keep in mind. First, the graph should always add important information rather than repeat information that already appears in the text or in a table. (If a graph presents information more clearly or efficiently, then you should keep the graph and eliminate the text or table.) Second, graphs should be as simple as possible. For example, the Publication Manual discourages the use of color unless it is absolutely necessary (although color can still be an effective element in posters, slide show presentations, or textbooks). Third, graphs should be interpretable on their own. A reader should be able to understand the basic result based only on the graph and its caption and should not have to refer to the text for an explanation.

There are also several more technical guidelines for graphs, including the following:

  • The graph should be slightly wider than it is tall.
  • The independent variable should be plotted on the x- axis and the dependent variable on the y- axis.
  • Values should increase from left to right on the x- axis and from bottom to top on the y- axis.

Axis Labels and Legends

  • Axis labels should be clear and concise and include the units of measurement if they do not appear in the caption.
  • Axis labels should be parallel to the axis.
  • Legends should appear within the boundaries of the graph.
  • Text should be in the same simple font throughout and differ by no more than four points.
Captions

  • Captions should briefly describe the figure, explain any abbreviations, and include the units of measurement if they do not appear in the axis labels.
  • Captions in an APA manuscript should be typed on a separate page that appears at the end of the manuscript. See Chapter 11 “Presenting Your Research” for more information.

As we have seen throughout this book, bar graphs are generally used to present and compare the mean scores for two or more groups or conditions. The bar graph in Figure 12.12 “Sample APA-Style Bar Graph, With Error Bars Representing the Standard Errors, Based on Research by Ollendick and Colleagues” is an APA-style version of Figure 12.5 “Bar Graph Showing Mean Clinician Phobia Ratings for Children in Two Treatment Conditions” . Notice that it conforms to all the guidelines listed. A new element in Figure 12.12 “Sample APA-Style Bar Graph, With Error Bars Representing the Standard Errors, Based on Research by Ollendick and Colleagues” is the smaller vertical bars that extend both upward and downward from the top of each main bar. These are error bars , and they represent the variability in each group or condition. Although they sometimes extend one standard deviation in each direction, they are more likely to extend one standard error in each direction (as in Figure 12.12 “Sample APA-Style Bar Graph, With Error Bars Representing the Standard Errors, Based on Research by Ollendick and Colleagues” ). The standard error is the standard deviation of the group divided by the square root of the sample size of the group. The standard error is used because, in general, a difference between group means that is greater than two standard errors is statistically significant. Thus one can “see” whether a difference is statistically significant based on a bar graph with error bars.

Figure 12.12 Sample APA-Style Bar Graph, With Error Bars Representing the Standard Errors, Based on Research by Ollendick and Colleagues

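The standard error computation described above (the group's standard deviation divided by the square root of the group's sample size) can be written directly, and the two-standard-error heuristic is easy to check. The scores below are invented for illustration:

```python
from math import sqrt
from statistics import mean, stdev

def standard_error(scores):
    """Standard error of the mean: SD divided by the square root of n."""
    return stdev(scores) / sqrt(len(scores))

# Hypothetical scores for two groups
group_a = [23, 25, 21, 24, 26, 22, 25, 23]
group_b = [18, 20, 17, 19, 21, 18, 20, 19]

se_a, se_b = standard_error(group_a), standard_error(group_b)

# Heuristic from the text: a difference between group means greater than
# about two standard errors is statistically significant.
diff = mean(group_a) - mean(group_b)
likely_significant = diff > 2 * max(se_a, se_b)
```

This is the quantity the error bars in Figure 12.12 represent, which is why a reader can "see" likely significance directly from the graph.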

Line Graphs

Line graphs are used to present correlations between quantitative variables when the independent variable has, or is organized into, a relatively small number of distinct levels. Each point in a line graph represents the mean score on the dependent variable for participants at one level of the independent variable. Figure 12.13 “Sample APA-Style Line Graph Based on Research by Carlson and Conard” is an APA-style version of the results of Carlson and Conard. Notice that it includes error bars representing the standard error and conforms to all the stated guidelines.

Figure 12.13 Sample APA-Style Line Graph Based on Research by Carlson and Conard

Sample APA-Style Line Graph Based on Research by Carlson and Conard

In most cases, the information in a line graph could just as easily be presented in a bar graph. In Figure 12.13, for example, one could replace each point with a bar that reaches up to the same level and leave the error bars right where they are. This emphasizes the fundamental similarity of the two types of statistical relationship. Both are differences in the average score on one variable across levels of another. The convention followed by most researchers, however, is to use a bar graph when the variable plotted on the x-axis is categorical and a line graph when it is quantitative.

Scatterplots

Scatterplots are used to present relationships between quantitative variables when the variable on the x-axis (typically the independent variable) has a large number of levels. Each point in a scatterplot represents an individual rather than the mean for a group of individuals, and there are no lines connecting the points. The graph in Figure 12.14 is an APA-style version of Figure 12.8, “Statistical Relationship Between Several College Students’ Scores on the Rosenberg Self-Esteem Scale Given on Two Occasions a Week Apart,” which illustrates a few additional points. First, when the variables on the x-axis and y-axis are conceptually similar and measured on the same scale—as here, where they are measures of the same variable on two different occasions—this can be emphasized by making the axes the same length. Second, when two or more individuals fall at exactly the same point on the graph, one way this can be indicated is by offsetting the points slightly along the x-axis. Other ways are by displaying the number of individuals in parentheses next to the point or by making the point larger or darker in proportion to the number of individuals. Finally, the straight line that best fits the points in the scatterplot, which is called the regression line, can also be included.

Figure 12.14 Sample APA-Style Scatterplot

Sample APA-Style Scatterplot
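The regression line mentioned above is the straight line that best fits the points in the least-squares sense. A minimal sketch of how its slope and intercept can be obtained, using hypothetical self-esteem scores (not the actual data behind Figure 12.14):

```python
import numpy as np

# Hypothetical Rosenberg self-esteem scores for six students measured
# on two occasions a week apart (illustrative values only).
time1 = np.array([12.0, 15.0, 18.0, 20.0, 24.0, 27.0])
time2 = np.array([14.0, 14.0, 19.0, 21.0, 23.0, 28.0])

# np.polyfit with degree 1 fits the least-squares regression line and
# returns its slope and intercept.
slope, intercept = np.polyfit(time1, time2, 1)

# Any point on the regression line drawn through the scatterplot:
predicted = slope * 19 + intercept
```

Plotting libraries can then draw this line over the scatterplot; the statistics themselves come from the least-squares fit shown here.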

Expressing Descriptive Statistics in Tables

Like graphs, tables can be used to present large amounts of information clearly and efficiently. The same general principles apply to tables as apply to graphs. They should add important information to the presentation of your results, be as simple as possible, and be interpretable on their own. Again, we focus here on tables for an APA-style manuscript.

The most common use of tables is to present several means and standard deviations—usually for complex research designs with multiple independent and dependent variables. Figure 12.15, for example, shows the results of a hypothetical study similar to the one by MacDonald and Martineau (2002) discussed in Chapter 5 “Psychological Measurement.” (The means in Figure 12.15 are the means reported by MacDonald and Martineau, but the standard deviations are not.) Recall that these researchers categorized participants as having low or high self-esteem, put them into a negative or positive mood, and measured their intentions to have unprotected sex. Although not mentioned in Chapter 5, they also measured participants’ attitudes toward unprotected sex. Notice that the table includes horizontal lines spanning the entire table at the top and bottom, and just beneath the column headings. Furthermore, every column has a heading—including the leftmost column—and additional headings span two or more columns to help organize the information and present it more efficiently. Finally, notice that APA-style tables are numbered consecutively starting at 1 (Table 1, Table 2, and so on) and given a brief but clear and descriptive title.

Figure 12.15 Sample APA-Style Table Presenting Means and Standard Deviations

Sample APA-Style Table Presenting Means and Standard Deviations

Another common use of tables is to present correlations—usually measured by Pearson’s r—among several variables. This is called a correlation matrix. Figure 12.16 is a correlation matrix based on a study by David McCabe and colleagues (McCabe, Roediger, McDaniel, Balota, & Hambrick, 2010). They were interested in the relationships between working memory and several other variables. We can see from the table that the correlation between working memory and executive function, for example, was an extremely strong .96, that the correlation between working memory and vocabulary was a medium .27, and that all the measures except vocabulary tend to decline with age. Notice here that only half the table is filled in because the other half would have identical values. For example, the Pearson’s r value in the upper right corner (working memory and age) would be the same as the one in the lower left corner (age and working memory). The correlation of a variable with itself is always 1.00, so these values are replaced by dashes to make the table easier to read.

Figure 12.16 Sample APA-Style Table (Correlation Matrix) Based on Research by McCabe and Colleagues

Sample APA-Style Table (Correlation Matrix) Based on Research by McCabe and Colleagues
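A half-filled correlation matrix of the kind just described can be sketched as follows. The scores below are hypothetical, not the data from McCabe and colleagues; the point is the layout: only the upper half is printed, and the diagonal is replaced by dashes.

```python
import numpy as np

# Hypothetical scores for three measures across five people.
working_memory = [8, 11, 13, 16, 17]
vocabulary     = [20, 24, 21, 27, 25]
age            = [66, 54, 45, 30, 25]

# np.corrcoef treats each row as one variable and returns the full
# (symmetric) matrix of Pearson's r values.
r = np.corrcoef([working_memory, vocabulary, age])

labels = ["WM", "Vocab", "Age"]
for i, row_label in enumerate(labels):
    cells = []
    for j in range(len(labels)):
        if j < i:
            cells.append("     ")           # redundant lower half left blank
        elif j == i:
            cells.append("  -  ")           # a variable with itself (always 1.00)
        else:
            cells.append(f"{r[i, j]:5.2f}")  # upper half: the reported r values
    print(row_label.ljust(6), " ".join(cells))
```

Because the matrix is symmetric, printing only one triangle loses nothing, exactly as in the APA-style table.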

As with graphs, precise statistical results that appear in a table do not need to be repeated in the text. Instead, the writer can note major trends and alert the reader to details (e.g., specific correlations) that are of particular interest.

Key Takeaways

  • In an APA-style article, simple results are most efficiently presented in the text, while more complex results are most efficiently presented in graphs or tables.
  • APA style includes several rules for presenting numerical results in the text. These include using words only for numbers less than 10 that do not represent precise statistical results, rounding most statistics to two decimal places, and using words (e.g., “mean”) in the text but symbols (e.g., M) in parentheses.
  • APA style includes several rules for presenting results in graphs and tables. Graphs and tables should add information rather than repeating information, be as simple as possible, and be interpretable on their own with a descriptive caption (for graphs) or a descriptive title (for tables).
  • Practice: In a classic study, men and women rated the importance of physical attractiveness in both a short-term mate and a long-term mate (Buss & Schmitt, 1993). The means and standard deviations are as follows. Men / Short Term: M = 5.67, SD = 2.34; Men / Long Term: M = 4.43, SD = 2.11; Women / Short Term: M = 5.67, SD = 2.48; Women / Long Term: M = 4.22, SD = 1.98. Present these results (a) in writing, (b) in a graph, and (c) in a table.
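For the written presentation in part (a) of the practice item, one way to format a single cell in APA style (the word in the text, the symbol in parentheses) can be sketched as follows; the sentence wording is illustrative, not prescribed.

```python
# The practice data from Buss and Schmitt (1993), as given in the text.
results = {
    ("Men", "Short Term"):   (5.67, 2.34),
    ("Men", "Long Term"):    (4.43, 2.11),
    ("Women", "Short Term"): (5.67, 2.48),
    ("Women", "Long Term"):  (4.22, 1.98),
}

def apa_sentence(group, term, m, sd):
    """Report one mean and standard deviation in APA style:
    words in the text, symbols (M, SD) in parentheses, two decimals."""
    return (f"{group} rated attractiveness in a {term.lower()} mate "
            f"(M = {m:.2f}, SD = {sd:.2f}).")

sentence = apa_sentence("Men", "Short Term", *results[("Men", "Short Term")])
```

The same dictionary could feed a bar graph (groups on the x-axis) for part (b) or a two-by-two table of means and standard deviations for part (c).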

Buss, D. M., & Schmitt, D. P. (1993). Sexual strategies theory: A contextual evolutionary analysis of human mating. Psychological Review, 100, 204–232.

MacDonald, T. K., & Martineau, A. M. (2002). Self-esteem, mood, and intentions to use condoms: When does low self-esteem lead to risky health behaviors? Journal of Experimental Social Psychology, 38, 299–306.

McCabe, D. P., Roediger, H. L., McDaniel, M. A., Balota, D. A., & Hambrick, D. Z. (2010). The relationship between working memory capacity and executive functioning. Neuropsychology, 24, 222–243.

Research Methods in Psychology Copyright © 2016 by University of Minnesota is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Italian validation of the mentalization scale (MentS)

  • Open access
  • Published: 08 May 2024


  • Marina Cosenza, ORCID: orcid.org/0000-0002-5813-017X
  • Barbara Pizzini, ORCID: orcid.org/0000-0001-5234-4823
  • Mariagiulia Sacco, ORCID: orcid.org/0000-0002-8587-7287
  • Francesca D’Olimpio, ORCID: orcid.org/0000-0002-9669-0127
  • Alda Troncone, ORCID: orcid.org/0000-0002-4641-6314
  • Maria Ciccarelli, ORCID: orcid.org/0000-0002-7285-1707
  • Aleksandar Dimitrijević, ORCID: orcid.org/0000-0001-6034-9595
  • Giovanna Nigro, ORCID: orcid.org/0000-0003-3518-2468


The research aimed to assess the reliability, factor structure, and validity of the Italian adaptation of the Mentalization Scale (MentS), a 28-item self-report questionnaire that measures mentalization across three dimensions. The psychometric properties of the Italian version were examined in two studies with large samples of adults and adolescents. The first study (Study 1) aimed to evaluate, through exploratory and confirmatory factor analyses, the construct validity of the Italian version of the MentS in adolescents ( N  = 618) and adults ( N  = 720). The second study (Study 2) was undertaken to test the convergent validity and temporal stability of the Italian version of the MentS. Specifically, the study assessed the relationship between the MentS and scores on the Reflective Functioning Questionnaire (RFQ-8), one of the most widely used instruments to assess mentalization, in a large sample of high-school students ( N  = 472). Furthermore, the study evaluated the 4-week test-retest reliability of the instrument in a sample of undergraduates ( N  = 128). The questionnaire exhibited strong internal consistency across both adult and adolescent samples, with Cronbach’s alphas ranging from 0.71 to 0.83. Exploratory and confirmatory factor analyses consistently identified three correlated underlying factors within both age groups, demonstrating the robust factor structure of the Italian version of the MentS. Furthermore, the tool demonstrated strong convergent validity with the RFQ-8 and acceptable test-retest reliability over a 4-week period. These findings provide compelling evidence supporting the Italian version of the MentS as a reliable and valid self-report measure for comprehensively assessing different facets of mentalization.


Introduction

In recent decades, mentalization (or mentalizing), also referred to as reflective functioning (RF), has emerged as a prominent empirical topic, steadily garnering heightened attention and interest. The term “mentalization” was first used in the 1960s and 1970s as a clinical psychoanalytic concept mostly related to psychosomatic states and conditions (Bion, 1962 ; Marty, 1991 ). More recently, it was defined as “the mental process by which an individual implicitly and explicitly interprets the actions of himself and others as meaningful based on intentional mental states such as personal desires, needs, feelings, beliefs, and reasons” (Bateman & Fonagy, 2004 , p. XXI). This process is regarded as pivotal in both emotional and cognitive development. It is closely linked to issues such as aggressiveness, delinquency, substance abuse, and various mental disorders (for reviews, see Luyten et al., 2020 , Johnson et al., 2022 ; see also Chevalier et al., 2023 ).

During the early 1990s, Peter Fonagy and his colleagues developed a tool aimed at assessing individuals’ capacity to reflect on their attachment experiences. This tool, known as the Reflective Function Scale (RFS; Fonagy et al., 1998 ), remains the gold standard in mentalization research, renowned for its remarkable reliability, validity, and well-established structure (Tauber et al., 2013 ). As stressed by some authors (e.g., Hüwe et al., 2023 ), the RF scale is very demanding, as it requires training and certification. The Reflective Functioning Questionnaire (RFQ; Fonagy et al., 2016 ) was expected to address these concerns, offering promise due to its concise format of only eight items and the convenient scoring system available online.

Recent research by Müller et al. (2022) highlights significant issues with the validity of the RFQ-8. While it aims to assess individuals’ ability to understand both their own and others’ behavior through intentional mental states, it predominantly focuses on self-understanding rather than providing a comprehensive view of both self and others. Consequently, the RFQ-8 fails to capture the multifaceted nature of mentalization adequately. Moreover, its two subscales only address extremes of certainty and uncertainty, representing a limited portion of the broader mentalization spectrum. Further analysis reveals a unifactorial rather than the intended bifactorial structure. In response, Horváth et al. (2023) introduced the RFQ-7, offering a streamlined questionnaire and innovative scoring system to address these limitations. This unidimensional tool includes a dimension of hypomentalization, spanning from low to high uncertainty levels.

A large body of studies demonstrated that poor mentalization is associated with several mental disorders, including borderline and antisocial personality disorders (Fonagy et al., 2016 ; Perroud et al., 2017 ), depression (Luyten et al., 2012 ), and eating disorders (Pedersen et al., 2015 ; Skårderud, 2007a , b ; see also Fonagy et al., 2016 ). Additionally, a deficit in the capacity to “hold mind in mind” is associated, among others, with substance abuse (Allen et al., 2008 ; Lecointe et al., 2016 ; Möller et al., 2017 ; Suchman et al., 2018 ), gambling disorder (Ciccarelli et al., 2021 , 2022a , b ; Cosenza et al., 2019 ; Lindberg et al., 2011 ; Nigro et al., 2019 ; Spada & Roarty, 2015 ), as well as with other forms of out-of-control behaviors, such as sexual (Berry & Berry, 2014 ) and food addiction (Innamorati et al., 2017 ).

However, it remains unclear to what degree these associations specifically indicate a deficit in self-understanding rather than a broader deficiency in mentalizing. The RFQ-8, with its focus predominantly on self-awareness, save for one item, leaves ambiguity regarding whether the aforementioned connections primarily signify a lack of self-comprehension or a general impairment in mentalizing abilities (Müller et al., 2022 ; see also Müller et al., 2023 ).

Other self-report measures assessing mentalization have emerged concurrently with the Reflective Functioning Questionnaire. The first was the Mentalization Questionnaire (MZQ; Hausberg et al., 2012 ), a 15-item scale featuring four subscales. However, its reliability ranges between 0.57 and 0.68, and its Italian translation diverged notably from the original version, indicating a potentially unidimensional structure rather than the intended four-dimensional framework (as observed in Ponti et al., 2019 ). Subsequently, the Mentalization Scale (MentS; Dimitrijevic et al., 2018 ) gained widespread international use and consistently demonstrated robust performance with minimal observed shortcomings.

The Mentalization Scale (MentS) consists of 28 self-report items, utilizing a 5-point Likert scale from completely agree to completely disagree . Elevated scores indicate a more advanced capacity for mentalization. Typically, respondents take approximately 10 min to complete the assessment.

In the extensive community sample utilized for validation, MentS showcased robust reliability (Cronbach’s Alpha = 0.84), with subscale reliabilities at 0.76 (MentS-S) and 0.77 (MentS-O and MentS-M). It demonstrated commendable whole-scale reliability and strong convergent-discriminant validity by exhibiting meaningful correlations with related constructs and fundamental personality traits. Moreover, the scale effectively differentiated between individuals with borderline personality disorder and controls, revealing significant distinctions. While the clinical sample exhibited acceptable internal consistencies across all subscale scores, MentS-M presented a deviation (for specific details, refer to Dimitrijevic et al., 2018 ).

Since its introduction in 2018, the MentS scale has garnered considerable attention, prompting validation studies across various linguistic contexts. These studies have encompassed translations into Chinese (Wen et al., 2022 ), Farsi (Ahmadian & Ghamarani, 2021 ), Persian, and Iranian (Ahmadian & Ghamarani, 2021 ; Asgarizadeh et al., 2023 ), as well as Japanese (Matsuba et al., 2022 ), Korean (Surim & Munhee, 2018 ), Polish (Jańczak, 2021 ), and Turkish (Törenli Kaya et al., 2023 ). Currently, efforts are underway for translations into Catalan, German, and Spanish.

Furthermore, the scale has been employed in various peer-reviewed research studies conducted across multiple languages, including French (Francoeur et al., 2020 ), Hindi (Bhola & Mehrotra, 2021 ), Hungarian (Fekete et al., 2019 ), Lithuanian (Gervinskaitė-Paulaitienė et al., 2023 ), Norwegian (Brattland et al., 2022 ), and Serbian (Berleković & Dimitrijević, 2020 ).

Several studies suggest that the MentS serves as a valid and reliable self-report tool for assessing mentalization. Its effectiveness extends to efficiently assessing sizable community samples and proving advantageous in clinical research. Across these studies, the three-factor structure of the MentS consistently emerged, with a few individual items occasionally loading on unintended factors. Test-retest reliability coefficients typically ranged from 0.68 to 0.85. Cronbach’s alphas for the overall scale varied between 0.73 and 0.86, except for the Turkish version, where it was 0.63. Subscale alphas were predictably lower but remained between 0.74 and 0.80. Notably, the correlation between scores from the Reflective Function Scale (RFS) and the MentS was 0.65 (p < 0.01), indicating a significant relationship. Moreover, correlations between individual subscales varied between 0.41 and 0.56 and were statistically significant in all three cases (Richter et al., 2021).

Considering the substantial evidence supporting its robustness and utility, we opted to validate the MentS in Italian. This paper presents comprehensive details of our validation study.

Overview of studies

The current research aimed to explore the psychometric properties of the Italian adaptation of the MentS through two studies. Initially, the MentS underwent translation into Italian, following the meticulous procedure outlined by Beaton et al. ( 2000 ), involving forward and backward translation, as well as pilot testing. Participants were drawn from both adult and adolescent populations.

Study 1 focused on assessing the construct validity of the Italian version of the MentS in adolescents and adults, utilizing exploratory and confirmatory factor analyses. In Study 2, the convergent validity and temporal stability of the Italian MentS were examined. Specifically, Study 2 delved into evaluating the correlation between the MentS and the Reflective Functioning Questionnaire (RFQ-8; Fonagy et al., 2016 ) in a substantial cohort of high-school students. Additionally, it gauged the 4-week test-retest reliability of the instrument among undergraduates. For both studies, we have reported the actual number of participants. Incomplete questionnaires (approximately 2% for both samples) were excluded from the final samples.

Consistent with the original MentS version, our expectation was to reproduce the scale’s three-dimensional structure and to observe gender differences in both adult and adolescent samples. In addition, we expected a significant correlation between MentS and RFQ-8 scores in the adolescent sample.

All studies were carried out in accordance with the Declaration of Helsinki and approved by the Ethics Committee of the Department of Psychology of the first author’s university. Before participation, all subjects provided informed consent. For minors, informed consent was obtained from parents.

Participants

In Study 1, a total of 1338 participants from both adolescent and adult cohorts were involved. The adolescent subset encompassed 618 high school students (48.5% boys; M age = 17.67; SD  = 0.53) attending various public high schools, including lyceums and technical and trade schools in Southern Italy. These participants were randomly divided into two groups of equal size, with the first group (151 boys and 158 girls; M age = 17.61; SD  = 0.70) used for the exploratory factor analysis (EFA) and the second group (149 boys and 160 girls; M age = 17.73; SD  = 0.65) for the confirmatory factor analysis (CFA).

The adult cohort consisted of 720 volunteers (42.4% men), ranging in age from 20 to 65 years ( M age = 38.28; SD  = 14.69), recruited from a community-based population. Like the adolescent group, this sample was randomly split into two equivalent groups. The first adult group (151 males and 209 females; M age = 39.17; SD  = 14.36) underwent exploratory factor analysis (EFA), while the second group (154 males and 206 females; M age = 37.40 years; SD  = 14.98) participated in the confirmatory factor analysis (CFA).


The Mentalization Scale . The MentS is structured into three distinct subscales, namely: The Self-related Mentalization subscale (MentS-S), the Other-related Mentalization subscale (MentS-O), and the Motivation to Mentalize subscale (MentS-M).

The MentS-S scale comprises eight items that center on the individual’s perception of their ability to comprehend their own mental states (e.g., 18. “I find it difficult to admit to myself that I am sad, hurt, or afraid”; 22. “It is difficult for me to find adequate words to express my feelings”). The MentS-O dimension consists of ten items aimed at gauging the individual’s confidence in understanding the mental states of others (e.g., 10. “I can make good predictions of other people’s behavior when I know their beliefs and feelings”; 20. “I can describe significant traits of people who are close to me with precision and in detail”). Finally, the MentS-M subscale encompasses ten items aimed at assessing the individual’s inclination towards utilizing their capacity for mentalizing and how significant this mentalizing ability is to them (e.g., 7. “When someone annoys me, I try to understand why I react in that way”; 17. “I like reading books and newspaper articles about psychological subjects”).

Statistical analyses

Data analyses were performed using IBM SPSS version 29.0. The significance threshold was set at p  < 0.05. Initially, all variables underwent scrutiny for missing data, distribution irregularities, and outlier identification. Univariate analysis of variance (ANOVA) was employed to examine gender differences in the data.

For both the adolescent and adult samples, scores obtained from the Italian version of the MentS underwent a principal components analysis followed by Oblimin rotation with Kaiser normalization. Before conducting the analyses, three key indices, as recommended by Field (2013), were assessed to ensure the data’s suitability for factor analysis. These included the Kaiser-Meyer-Olkin measure (KMO) of sampling adequacy, the determinant of the correlation matrix to detect multicollinearity, and Bartlett’s test of sphericity. Bartlett’s test specifically evaluates the null hypothesis that the original correlation matrix is an identity matrix (Field, 2013, p. 695). Confirmatory factor analysis was carried out using the EQS 6.2 software program for structural equation modeling (Bentler, 2008).

Initially, to explore potential gender-based differences in MentS scores, univariate ANOVA was employed. As expected, results from both the adolescent and adult samples revealed noteworthy disparities: males attained significantly higher scores in the Self dimension, whereas females exhibited superior performance in the Others and Motivation dimensions, along with the overall MentS score.

Table  1 displays descriptive statistics for the entire samples as well as breakdowns by gender, along with Cronbach’s alpha values and the results of the univariate ANOVA. The reliability of the MentS subscales was confirmed for both the adolescent and adult samples, as evidenced by the Cronbach’s alpha values reported in Table  1 .

It is important to note that we initially calculated Cronbach’s alpha coefficients to facilitate comparisons with the original version of the MentS and its subsequent adaptations. However, to obtain a more refined and accurate measure of reliability, we also computed the omega coefficient (ω; McDonald, 1999) for each subscale and the total score. This coefficient goes beyond Cronbach’s alpha by incorporating both item factor loadings and uniquenesses, resulting in a more nuanced and precise estimation of reliability. As Table 1 shows, the values of internal consistency as measured by omega coefficients were good for the full scale and the subscales of the MentS.
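The Cronbach's alpha values reported here come from the standard formula, which can be sketched as follows. The item responses below are hypothetical, not MentS data; the function implements alpha = k/(k-1) x (1 - sum of item variances / variance of the total score).

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 5-point responses from six people to four items.
scores = [[4, 5, 4, 5],
          [3, 3, 4, 3],
          [2, 2, 1, 2],
          [5, 4, 5, 5],
          [1, 2, 2, 1],
          [3, 4, 3, 4]]
alpha = cronbach_alpha(scores)
```

Omega, by contrast, requires a factor model (loadings and uniquenesses) rather than raw variances alone, which is why it is usually computed from the CFA solution.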

Exploratory factor analysis

In both samples, the Kaiser-Meyer-Olkin (KMO) values were notably high (Adolescents = 0.822; Adults = 0.861). The determinant of the correlation matrix was consistent at 0.001 for both groups, and Bartlett’s test of sphericity returned significant results (Adolescents: χ 2 (378) = 2016.64; p  < 0.001; Adults: χ 2 (378) = 2378.76; p  < 0.001). These outcomes signified sufficiently large correlations, supporting the suitability of the data for Principal Component Analysis (PCA).

In each case, the determination of retained factors relied on parallel analysis, conducted using the SPSS syntax developed by O’Connor ( 2000 ). Parallel analysis consistently indicated a three-component solution as the most appropriate for both samples.

In the adolescent sample, three factors collectively explained 35.95% of the variance. The first factor accounted for 18.29% of the variance, the second factor accounted for 11.55%, while the third factor explained 6.11% of the variance. Table  2 presents individual item loadings on these retained components. Notably, the first factor encompassed the ten items of the MentS Others subscale, the second factor comprised the eight items of the MentS Self scale, and the third factor consisted of the ten items of the MentS Motivation scale.

Regarding the adult sample, the three-factor solution accounted for a cumulative variance of 36.07%. The distribution of variance across these factors was as follows: the first factor explained 21.41%, the second factor accounted for 9.27%, and the third factor elucidated 5.39%. These relationships are detailed in Table  3 , which displays the subscale loadings across the three dimensions. Factor loadings distinctly revealed the composition of each factor: the first factor encapsulated the ten items of the MentS Others subscale, the second factor encompassed the eight items of the MentS Self scale, and the third factor included the ten items of the MentS Motivation dimension.

Confirmatory factor analysis

Confirmatory Factor Analysis (CFA) utilizing maximum likelihood estimation was employed to examine the reproducibility of the proposed factor structure outlined by Dimitrijević et al. ( 2018 ) in adolescent and adult Italian samples.

In both samples, three models underwent testing. The initial model was a one-factor structure where all items were anticipated to load onto a single factor. The second model consisted of three factors with no correlation among them, while the third model allowed for intercorrelation between the factors.

Each model’s goodness of fit was assessed using various measures: the likelihood ratio chi-square test statistic, adjusted for data nonnormality using Satorra and Bentler’s method (1994; S-B χ 2 ), alongside four descriptive fit indices: standardized root-mean-square residual (SRMR), root-mean-square error of approximation (RMSEA) with its 90% confidence interval (90% CI), goodness of fit index (GFI), and comparative fit index (CFI). Considering the sensitivity of the χ 2 statistic to sample size (MacCallum, 1990 ; Marsh et al., 1988 ), interpretations of model fit were guided by a range of fit indices. Adequate model fit was identified by a non-significant S-B χ 2 , GFI, and CFI values of 0.90 or higher, as well as an RMSEA less than 0.08.
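The fit criteria just listed reduce to simple threshold checks, which can be sketched as follows; the index values in the example are hypothetical, not the values reported in Table 4.

```python
def adequate_fit(gfi, cfi, rmsea):
    """Adequate model fit per the thresholds used in the text:
    GFI and CFI of 0.90 or higher, and RMSEA below 0.08."""
    return gfi >= 0.90 and cfi >= 0.90 and rmsea < 0.08

# Hypothetical fit indices for two candidate models:
fit_good = adequate_fit(gfi=0.95, cfi=0.96, rmsea=0.05)
fit_poor = adequate_fit(gfi=0.85, cfi=0.88, rmsea=0.11)
```

In practice the best-fitting model is the one with the highest GFI and CFI and the lowest RMSEA and SRMR, as the following paragraph explains.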

Table  4 presents the model fit statistics for the three models across both groups. The model exhibiting the highest GFI and CFI estimates while displaying the lowest RMSEA and SRMR values was considered the most suitable or best-fitting model.

The study aimed to assess the reliability of the MentS and examine its factor structure within substantial cohorts of adolescents and adults. Reliability analysis demonstrated that the MentS subscales exhibit good internal consistency. Moreover, outcomes from exploratory factor analysis notably indicated that the three-factor model appropriately captured a substantial proportion of variance, reflected in strong factor loadings.

In line with Dimitrijević et al. ( 2018 ), both exploratory and confirmatory factor analyses consistently supported a three-factor structure for the Italian version of the MentS across adolescent and adult populations. Additionally, as anticipated, gender differences in MentS scores were observed in both samples. Specifically, males attained significantly higher scores in the Self dimension, while females reported higher scores on the Others and Motivation dimensions, as well as on the overall MentS score.

Study 2 was undertaken to test the convergent validity and temporal stability of the Italian version of the MentS. A large sample of adolescents was administered the MentS and the Reflective Functioning Questionnaire (RFQ-8; Fonagy et al., 2016; Italian version for adolescents: Cosenza et al., 2019; see also Bizzi et al., 2022). Furthermore, the test-retest reliability of the instrument was evaluated over a 4-week interval in a sample of undergraduate students.

Four hundred and seventy-two adolescents (44.1% males), aged between 16 and 19 years (Mean age = 17.63; SD = 0.72), participated in this study. They were administered the Italian versions of the MentS and the RFQ-8. The RFQ-8, an eight-item self-rating questionnaire, is specifically designed to assess reflective functioning. Respondents rate items on a seven-point Likert scale, ranging from 1 (strongly disagree) to 7 (strongly agree). The questionnaire comprises two subscales that tap into distinct mental processes: Certainty about mental states (RFQ_C) and Uncertainty about mental states (RFQ_U). Low agreement on the RFQ_C scale denotes a tendency toward excessive yet inaccurate mentalizing (hypermentalizing), while higher agreement signifies a more authentic mentalizing approach. Similarly, very high scores on the RFQ_U indicate a near absence of knowledge about mental states (hypomentalizing), whereas lower scores reflect recognition of the complexity of one’s own and others’ mental states, indicative of genuine mentalizing.

Zero-order correlations between the three dimensions of the MentS and the two subscales of the RFQ-8 were computed.

Additionally, a new sample of 128 undergraduates (24.2% males), aged between 20 and 29 years (Mean age = 21.22; SD = 1.56), completed the MentS twice to assess the scale’s 4-week test-retest reliability.

Results showed a strong positive correlation between MentS-Self and RFQ-8 Certainty scale scores (r = 0.44; p < 0.001), as well as a significant negative association between MentS-Self and the RFQ-8 Uncertainty scale (r = -0.43; p < 0.001).

As for temporal stability, the Italian version of the MentS demonstrated acceptable 4-week test-retest reliability for the three dimensions of the instrument as well as for the full scale (MentS-Self: r = 0.63; MentS-Others: r = 0.65; MentS-Motivation: r = 0.63; MentS full scale: r = 0.83; all ps < 0.001).
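Test-retest coefficients of this kind are zero-order Pearson correlations between time-1 and time-2 scores. A minimal sketch with simulated scores (the sample size and score distributions below are illustrative assumptions, not the study’s data):

```python
import math
import random

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Simulated time-1 full-scale scores and noisier time-2 retest scores
random.seed(0)
time1 = [random.gauss(100, 15) for _ in range(128)]
time2 = [s + random.gauss(0, 9) for s in time1]
print(round(pearson_r(time1, time2), 2))  # high positive r, as for a stable scale
```

The less measurement noise added between administrations, the closer the coefficient approaches 1, which is why the full scale (aggregating over more items) shows higher stability than the individual subscales.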

Study 2 assessed the convergent validity and temporal stability of the Italian version of the MentS among adolescents and undergraduates, respectively. The results highlighted the scale’s good convergent validity with the Reflective Functioning Questionnaire (RFQ-8) and its good 4-week test-retest reliability.

While the RFQ-8 is designed to assess an individual’s capacity to understand intentional mental states within themselves and others (Fonagy et al., 2012; Luyten et al., 2020), our study revealed a strong correlation between RFQ-8 scores and MentS-Self scores. This outcome underscores a distinct connection between reflective functioning and self-awareness, while revealing no such association with other dimensions of the MentS. These findings suggest that the RFQ-8 may particularly emphasize the comprehension of one’s own mental states rather than those of others. This aligns with previous research (e.g., Dimitrijević et al., 2018; Müller et al., 2022, 2023) highlighting the significance of introspective abilities in reflective functioning. Additional support for this notion can be found in the work of Li, Carracher, and Bird (2020).

General discussion

In recent decades, mentalization (also known as mentalizing) has emerged as a prominent empirical field, steadily gaining attention and interest. Impairment in the ability to perceive and interpret one’s own and others’ behavior in terms of intentional mental states, such as thoughts, feelings, desires, wishes, goals, and attitudes (Fonagy et al., 2012), has received significant attention over the past years (for a review, see Luyten et al., 2020).

Research exploring the significance of mentalization in psychopathology is rapidly expanding, reflecting an increasing interest in comprehending its implications. The present studies contributed to this ongoing line of research by developing and testing an Italian version of the MentS scale, a 28-item self-report measure of mentalization.

An initial measurement study (Study 1) employing exploratory and confirmatory factor analyses on large samples of adolescents and adults supported the three-correlated-factors model postulated by Dimitrijević et al. (2018). Study 2 was devoted to testing the convergent validity and temporal stability of the MentS. The results obtained from a sample of adolescents demonstrated that the MentS shows good convergent validity with the Reflective Functioning Questionnaire (RFQ-8). In addition, results from a sample of undergraduates showed that the Italian version of the MentS demonstrates good test-retest reliability for its three dimensions and the full scale.

Notably, in all studies, we observed significant gender differences in MentS scores. In both adult and adolescent samples, male participants scored significantly higher on the MentS-Self dimension but significantly lower on the Others and Motivation subscales, as well as on the MentS total score. This outcome aligns with the conclusions drawn by Dimitrijević et al. (2018), which highlighted a superior proficiency in understanding one’s mental states among males. Conversely, females exhibited greater confidence in grasping the mental states of others and demonstrated a stronger need to understand the psychic world of self and others. Our findings suggest that gender affects mentalization, albeit in a differentiated manner depending on the specific dimension under consideration. The utilization of a multidimensional measurement approach enabled the capture of crucial nuances that might otherwise have been overlooked. As emphasized by Krach et al. (2009), the longstanding hypothesis that women differ from men in their mentalizing abilities underscores the importance of employing measurement tools capable of capturing diverse facets of mentalization when evaluating gender differences.
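Subscale-by-subscale gender comparisons of this kind are typically independent-samples t-tests. A minimal sketch using Welch’s t statistic on made-up MentS-Self scores (the values are illustrative assumptions, not the study’s data):

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Hypothetical MentS-Self scores by gender (illustrative only)
males = [32, 35, 31, 34, 36, 33]
females = [29, 30, 28, 31, 27, 30]
print(round(welch_t(males, females), 2))  # positive t: higher male mean
```

Running one such test per subscale and for the total score, as done here, is what allows the direction of the gender difference to reverse across dimensions rather than being averaged away in a single overall comparison.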

Limitations and future research

While the current studies advanced work on instruments assessing mentalization, at least two limitations should be considered. First, our studies relied on convenience samples. Second, the estimation of test-retest reliability was conducted on an undergraduate sample with a notably higher percentage of females than males, which may limit the generalizability of these results across genders.

Future research evaluating the extent to which the three subfactors differentially predict outcomes in substantive domains is desirable. The use of a multidimensional instrument, such as the MentS, could help in clarifying this relevant issue and test intervention strategies focused on recovering the capacity to understand others and oneself in terms of internal mental states, always bearing in mind the various dimensions of mentalization. Furthermore, future research ought to persist in examining gender differences associated with mentalization across both normative and clinical populations, encompassing not only adolescents and adults but also older individuals, a demographic that has received limited attention in previous studies.

Data Availability

The data supporting this study’s findings are available from the corresponding author, Marina Cosenza, upon reasonable request. The data are not publicly available due to privacy or ethical restrictions.

Hypomentalizing and hypermentalizing represent distinct impairments within RF. Hypomentalizing is characterized by an inability to engage with intricate models of one’s own mind or others’. In contrast, hypermentalization involves continuous attempts to mentalize without effectively integrating cognition and emotions (Fonagy et al., 2016 ).

Ahmadian, Z., & Ghamarani, A. (2021). Reliability and validity of Persian version of mentalization scale in university students. Journal of Fundamentals of Mental Health

Allen, J. G., Fonagy, P., & Bateman, A. W. (2008). Mentalizing in clinical practice. Washington, DC: American Psychiatric Press.

Asgarizadeh, A., Vahidi, E., Seyed Mousavi, P. S., Bagherzanjani, A., & Ghanbari, S. (2023). Mentalization scale (MentS): Validity and reliability of the Iranian version in a sample of nonclinical adults. Brain and Behavior , 13 (8), e3114. https://doi.org/10.1002/brb3.3114 .

Bateman, A., & Fonagy, P. (2004). Psychotherapy for borderline personality disorder: Mentalization based treatment . Oxford University Press.

Beaton, D. E., Bombardier, C., Guillemin, F., & Ferraz, M. B. (2000). Guidelines for the process of cross-cultural adaptation of self-report measures. Spine , 25 (24), 3186–3191. https://doi.org/10.1097/00007632-200012150-00014 .

Bentler, P. M. (2008). EQS structural equation modeling software . Multivariate Software.

Berleković, V., & Dimitrijević, A. (2020). Attachment and mentalization in war veterans with and without post-traumatic stress disorder. In A. Hamburger (Ed.), Trauma, Trust, and memory (pp. 151–159). Routledge.

Berry, M. D., & Berry, P. D. (2014). Mentalization-based therapy for sexual addiction: Foundations for a clinical model. Sexual and Relationship Therapy, 29 (2), 245–260. https://doi.org/10.1080/14681994.2013.856516

Bhola, P., & Mehrotra, K. (2021). Associations between countertransference reactions towards patients with borderline personality disorder and therapist experience levels and mentalization ability. Trends in Psychiatry and Psychotherapy , 43 (2), 116–125. https://doi.org/10.47626/2237-6089-2020-0025 .

Bion, W. R. (1962). The psycho-analytic study of thinking. International Journal of Psychoanalysis , 43 (4–5), 306–310. https://doi.org/10.1002/j.2167-4086.2013.00030.x .

Bizzi, F., Riva, A., Borelli, J. L., Charpentier-Mora, S., Bomba, M., Cavanna, D., & Nacinovich, R. (2022). The Italian version of the reflective functioning questionnaire: Validity within a sample of adolescents and associations with psychological problems and alexithymia. Journal of Clinical Psychology , 78 (4), 503–516. https://doi.org/10.1002/jclp.23218 .

Brattland, H., Holgersen, K. H., Vogel, P. A., Anderson, T., & Ryum, T. (2022). An apprenticeship model in the training of psychotherapy students. Study protocol for a randomized controlled trial and qualitative investigation. PloS One , 17 (8), e0272164. https://doi.org/10.1371/journal.pone.0272164 .

Chevalier, V., Simard, V., & Achim, J. (2023). Meta-analyses of the associations of mentalization and proxy variables with anxiety and internalizing problems. Journal of Anxiety Disorders . https://doi.org/10.1016/j.janxdis.2023.102694 . 102694.

Ciccarelli, M., Cosenza, M., Nigro, G., Griffiths, M., & D’Olimpio, F. (2022a). Gaming and gambling in adolescence: The role of personality, reflective functioning, time perspective and dissociation. International Gambling Studies , 22 , 161–179. https://doi.org/10.1080/14459795.2021.1985583 .

Ciccarelli, M., Nigro, G., D’Olimpio, F., Griffiths, M. D., & Cosenza, M. (2021). Mentalizing failures, emotional dysregulation, and cognitive distortions among adolescent problem gamblers. Journal of Gambling Studies , 37 (1), 283–298. https://doi.org/10.1007/s10899-020-09967-w .

Ciccarelli, M., Nigro, G., D’Olimpio, F., Griffiths, M. D., Sacco, M., Pizzini, B., & Cosenza, M. (2022b). The associations between loneliness, anxiety, and problematic gaming behavior during the COVID-19 pandemic: The mediating role of mentalization. Mediterranean Journal of Clinical Psychology , 10 (1), 1–21. https://doi.org/10.13129/2282-1619/mjcp-3257 .

Cosenza, M., Ciccarelli, M., & Nigro, G. (2019). The steamy mirror of adolescent gamblers: Mentalization, impulsivity, and time horizon. Addictive Behaviors , 89 , 156–162. https://doi.org/10.1016/j.addbeh.2018.10.002 .

Dimitrijević, A., Hanak, N., Altaras Dimitrijević, A., & Jolić Marjanović, Z. (2018). The mentalization scale (MentS): A self-report measure for the assessment of mentalizing capacity. Journal of Personality Assessment, 100(3), 268–280. https://doi.org/10.1080/00223891.2017.1310730

Fekete, K., Török, E., Kelemen, O., Makkos, Z., Csigó, K., & Kéri, S. (2019). A mentalizáció dimenziói pszichotikus zavarokban [Dimensions of mentalization in psychotic disorders]. Neuropsychopharmacologia Hungarica, 21(1), 5–11.

Field, A. (2013). Discovering statistics using IBM SPSS statistics (4th edition). Sage.

Fisher, R., & Fontaine, J. R. J. (2012). Methods for investigating structural equivalence. In D. Matsumo, & F. J. R. Van de Vijver (Eds.), Cross-cultural research methods in psychology (Vol. 215, p. 179). Cambridge University Press.

Fonagy, P., Bateman, A., & Luyten, P. (2012). Introduction and overview. In A. Bateman, & P. Fonagy (Eds.), Handbook of mentalizing in mental health practice (pp. 3–41). American Psychiatric Publishing Inc.

Fonagy, P., Luyten, P., Moulton-Perkins, A., Lee, Y. W., Warren, F., Howard, S., Ghinai, R., Fearon, P., & Lowyck, B. (2016). Development and validation of a self-report measure of Mentalizing: The reflective functioning questionnaire. PloS One , 11 (7), e0158678. https://doi.org/10.1371/journal.pone.0158678 .

Fonagy, P., Target, M., Steele, H., & Steele, M. (1998). Reflective Functioning Scale manual. Unpublished manuscript, London, England.

Francoeur, A., Lecomte, T., Daigneault, I., Brassard, A., Lecours, V., & Hache-Labelle, C. (2020). Social cognition as mediator of romantic breakup adjustment in young adults who experienced childhood maltreatment. Journal of Aggression, Maltreatment & Trauma, 29(9), 1125–1142. https://doi.org/10.1080/10926771.2019.1603177

Gervinskaitė-Paulaitienė, L., Byrne, G., & Barkauskienė, R. (2023). Mentalization-based parenting program for child Maltreatment Prevention: A pre–post study of 12-Week Lighthouse Group Program. Children , 10 (6), 1047. https://doi.org/10.3390/children10061047 .

Hausberg, M. C., Schulz, H., Piegler, T., Happach, C. G., Klöpper, M., Brütt, A. L., Sammet, I., & Andreas, S. (2012). Is a self-rated instrument appropriate to assess mentalization in patients with mental disorders? Development and first validation of the Mentalization Questionnaire (MZQ). Psychotherapy Research, 22 , 699–709. https://doi.org/10.1080/10503307.2012.709325

Horváth, Z., Demetrovics, O., Paksi, B., Unoka, Z., & Demetrovics, Z. (2023). The Reflective Functioning Questionnaire-Revised-7 (RFQ-R-7): A new measurement model assessing hypomentalization. PloS one , 18 (2), e0282000. https://doi.org/10.1371/journal.pone.0282000 .

Hüwe, L., Laser, L., & Andreas, S. (2023). Observer-based and computerized measures of the patient’s mentalization in psychotherapy: A scoping review. Psychotherapy Research . https://doi.org/10.1080/10503307.2023.2226812

Innamorati, M., Imperatori, C., Harnic, D., Erbuto, D., Patitucci, E., Janiri, L., Lamis, D. A., Pompili, M., Tamburello, S., & Fabbricatore, M. (2017). Emotion regulation and mentalization in people at risk for Food Addiction. Behavioral Medicine , 43 (1), 21–30. https://doi.org/10.1080/08964289.2015.1036831 .

Jańczak, M. (2021). Polish adaptation and validation of the mentalization scale (MentS)—A self-report measure of mentalizing. Psychiatria Polska , 55 (6), 1257–1274. https://doi.org/10.12740/PP/125383 .

Johnson, B. N., Kivity, Y., Rosenstein, L. K., LeBreton, J. M., & Levy, K. N. (2022). The association between mentalizing and psychopathology: A meta-analysis of the reading the mind in the eyes task across psychiatric disorders. Clinical Psychology: Science and Practice , 29 (4), 423–439. https://doi.org/10.1037/cps0000105 .

Krach, S., Blümel, I., Marjoram, D., Lataster, T., Krabbendam, L., Weber, J., & Kircher, T. (2009). Are women better mindreaders? Sex differences in neural correlates of mentalizing detected with functional MRI. BMC Neuroscience , 10 (1), 1–11. https://doi.org/10.1186/1471-2202-10-9 .

Lecointe, P., Bernoussi, A., Masson, J., & Schauder, S. (2016). La mentalisation affective de la personnalité limite addictive: Une revue de la littérature [Affective mentalizing in addictive Borderline personality: A literature review]. L’Encephale , 42 (5), 458–462. https://doi.org/10.1016/j.encep.2016.02.001 .

Li, E. T., Carracher, E., & Bird, T. (2020). Linking childhood emotional abuse and adult depressive symptoms: The role of mentalizing incapacity. Child Abuse & Neglect , 99 , 104253. https://doi.org/10.1016/j.chiabu.2019.104253 .

Lindberg, A., Fernie, B. A., & Spada, M. M. (2011). Metacognitions in problem gambling. Journal of Gambling Studies , 27 (1), 73–81. https://doi.org/10.1007/s10899-010-9193-1 .

Luyten, P., Campbell, C., Allison, E., & Fonagy, P. (2020). The Mentalizing Approach to Psychopathology: State of the art and future directions. Annual Review of Clinical Psychology , 16 , 297–325. https://doi.org/10.1146/annurev-clinpsy-071919-015355 .

Luyten, P., Fonagy, P., Lemma, A., & Target, M. (2012). Depression. In A. Bateman, & P. Fonagy (Eds.), Handbook of mentalizing in mental health practice (pp. 385–417). American Psychiatric Association.

MacCallum, R. C. (1990). The need for alternative measures of fit in covariance structure modeling. Multivariate Behavioral Research, 25(2), 157–162. https://doi.org/10.1207/s15327906mbr2502_2

Marsh, H. W., Balla, J. R., & McDonald, R. P. (1988). Goodness-of-fit indexes in confirmatory factor analysis: The effect of sample size. Psychological Bulletin , 103 (3), 391–410. https://doi.org/10.1037/0033-2909.103.3.391 .

Marty, P. (1991). Mentalisation et psychosomatique . Laboratoire Delagrange.

Matsuba, Y., Haraguchi, Y., Iwasaki, M., Otsuki, T., & Katsuragawa, Y. (2022). Development of the Japanese version of mentalization scale (MentS-J) and examination of its reliability and validity. Japanese Journal of Developmental Psychology . https://doi.org/10.11201/jjdp.33.137

McDonald, R. P. (1999). Test theory: A unified approach . Lawrence Erlbaum.

Möller, C., Karlgren, L., Sandell, A., Falkenström, F., & Philips, B. (2017). Mentalization-based therapy adherence and competence stimulates in-session mentalization in psychotherapy for borderline personality disorder with co-morbid substance dependence. Psychotherapy Research , 27 (6), 749–765. https://doi.org/10.1080/10503307.2016.1158433 .

Müller, S., Wendt, L. P., Spitzer, C., Masuhr, O., Back, S. N., & Zimmermann, J. (2022). A critical evaluation of the reflective functioning questionnaire (RFQ). Journal of Personality Assessment , 104 (5), 613–627. https://doi.org/10.1080/00223891.2021.1981346 .

Müller, S., Wendt, L. P., & Zimmermann, J. (2023). Development and validation of the Certainty about Mental States Questionnaire (CAMSQ): A self-report measure of mentalizing oneself and others. Assessment , 30 (3), 651–674. https://doi.org/10.1177/10731911211061280 .

Nigro, G., Matarazzo, O., Ciccarelli, M., D’Olimpio, F., & Cosenza, M. (2019). To chase or not to chase: A study on the role of mentalization and alcohol consumption in chasing behavior. Journal of Behavioral Addictions , 8 (4), 743–753. https://doi.org/10.1556/2006.8.2019.67 .

O’Connor, B. P. (2000). SPSS and SAS programs for determining the number of components using parallel analysis and Velicer’s MAP test. Behavior Research Methods Instruments & Computers , 32 (3), 396–402. https://doi.org/10.3758/bf03200807 .

Pedersen, S. H., Poulsen, S., & Lunn, S. (2015). Eating disorders and Mentalization: High reflective functioning in patients with Bulimia Nervosa. Journal of the American Psychoanalytic Association , 63 (4), 671–694. https://doi.org/10.1177/0003065115602440 .

Perroud, N., Badoud, D., Weibel, S., Nicastro, R., Hasler, R., Küng, A. L., Luyten, P., Fonagy, P., Dayer, A., Aubry, J. M., Prada, P., & Debbané, M. (2017). Mentalization in adults with attention deficit hyperactivity disorder: Comparison with controls and patients with borderline personality disorder. Psychiatry Research , 256 , 334–341. https://doi.org/10.1016/j.psychres.2017.06.087 .

Ponti, L., Stefanini, M. C., Gori, S., & Smorti, M. (2019). The assessment of mentalizing ability in adolescents: The Italian adaptation of the mentalization questionnaire (MZQ). TPM – Testing, Psychometrics, Methodology in Applied Psychology, 26(1), 29–38. https://doi.org/10.4473/TPM26.1.2

Richter, F., Steinmair, D., & Löffler-Stastka, H. (2021). Construct validity of the mentalization scale (MentS) within a mixed psychiatric sample. Frontiers in Psychology , 12 , 608214. https://doi.org/10.3389/fpsyg.2021.608214 .

Skårderud, F. (2007a). Eating one’s words, part I: ‘Concretised metaphors’ and reflective function in anorexia nervosa–an interview study. European Eating Disorders Review, 15(3), 163–174. https://doi.org/10.1002/erv.777

Skårderud, F. (2007b). Eating one’s words, part II: The embodied mind and reflective function in anorexia nervosa–theory. European Eating Disorders Review, 15(4), 243–252. https://doi.org/10.1002/erv.778

Spada, M. M., & Roarty, A. (2015). The relative contribution of metacognitions and attentional control to the severity of gambling in problem gamblers. Addictive Behaviors Reports , 1 , 7–11. https://doi.org/10.1016/j.abrep.2015.02.001 .

Suchman, N. E., DeCoste, C., Borelli, J. L., & McMahon, T. J. (2018). Does improvement in maternal attachment representations predict greater maternal sensitivity, child attachment security and lower rates of relapse to substance use? A second test of mothering from the Inside out treatment mechanisms. Journal of Substance Abuse Treatment , 85 , 21–30. https://doi.org/10.1016/j.jsat.2017.11.006 .

Surim, L., & Munhee, L. (2018). Validation of the Korean Version of the mentalization scale. Counseling Studies , 19 (5), 117–135. https://doi.org/10.15703/kjc.19.5.201810.117 .

Taubner, S., Hörz, S., Fischer-Kern, M., Doering, S., Buchheim, A., & Zimmermann, J. (2013). Internal structure of the reflective functioning scale. Psychological Assessment , 25 (1), 127–135. https://doi.org/10.1037/a0029138 .

Törenli Kaya, Z., Alpay, E. H., Türkkal Yenigüç, Ş., & Özçürümez Bilgili, G. (2023). Validity and reliability of the Turkish version of the mentalization scale (MentS). Zihinselleştirme Ölçeği’nin Türkçe Çevirisinin Geçerlik ve Güvenirlik Çalışması. Turk Psikiyatri Dergisi = Turkish Journal of Psychiatry , 34 (2), 118–124. https://doi.org/10.5080/u25692 .

Wen, Y., Fang, W., Wang, Y., Du, J., Dong, Y., Zu, X., & Wang, K. (2022). Reliability and validity of the Chinese version of the mentalization scale in the general population and patients with schizophrenia: A multicenter study in China. Current Psychology . https://doi.org/10.1007/s12144-022-04093-9

Funding

Open access funding provided by Università degli Studi della Campania Luigi Vanvitelli within the CRUI-CARE Agreement.

Author information

Authors and Affiliations

Department of Psychology, University of Campania Luigi Vanvitelli, Viale Ellittico, 31, 81100, Caserta, Italy

Marina Cosenza, Mariagiulia Sacco, Francesca D’Olimpio, Alda Troncone, Maria Ciccarelli & Giovanna Nigro

Giustino Fortunato Telematic University, Benevento, Italy

Barbara Pizzini

International Psychoanalytic University, IPU, Stromstr. 3, 10555, Berlin, Germany

Giustino Fortunato University, Viale Raffaele Delcogliano, 12, 82100, Benevento, Italy

Aleksandar Dimitrijević

Contributions

MCo and GN: Conceptualization, Methodology. AD: Literature searches. AD, MCo, and GN: Writing, Review, and Editing. BP: Summary of previous research studies. MCi, AT, and FDO: Data curation, Investigation. GN: Statistical analyses and Supervision.

Corresponding author

Correspondence to Marina Cosenza .

Ethics declarations

Competing interests

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Ethics approval

Approval was obtained from the ethics committee of University of Campania “Luigi Vanvitelli”. The procedures used in this study adhere to the tenets of the Declaration of Helsinki.

Consent to participate

Informed consent was obtained from all individual participants included in the study.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cosenza, M., Pizzini, B., Sacco, M. et al. Italian validation of the mentalization scale (MentS). Curr Psychol (2024). https://doi.org/10.1007/s12144-024-06071-9

Accepted : 28 April 2024

Published : 08 May 2024

DOI : https://doi.org/10.1007/s12144-024-06071-9

Keywords

  • Mentalization
  • Mentalization assessment
  • Mentalization Scale (MentS)
  • Reflective functioning questionnaire
  • Italian validation

IMAGES

  1. Simple How To Write A Lab Report Psychology Example About Marketing

    example research report psychology

  2. Sample Lab Report Jane Doe Intro to Cognitive Psych Lab Report

    example research report psychology

  3. FREE 5+ Sample Research Paper Templates in PDF

    example research report psychology

  4. How to write a psychology lab report results

    example research report psychology

  5. (PDF) Teaching Psychological Report Writing: Content and Process

    example research report psychology

  6. Full Psychological Report.Sample

    example research report psychology

VIDEO

  1. How to write case report? Complete guide in Urdu

  2. Psychology lab report writing guide: the title page

  3. Presentations on Social Psychology and New Research

  4. Report Writing

  5. Psychology

  6. Format to write research Proposal ( How to write research Proposal) Amharic Tutorial

COMMENTS

  1. Writing a Research Report in American Psychological Association (APA

    Figures 11.2, 11.3, 11.4, and 11.5 show some sample pages from an APA-style empirical research report originally written by undergraduate student Tomoe Suyama at California State University, Fresno. The main purpose of these figures is to illustrate the basic organization and formatting of an APA-style empirical research report, although many ...

  2. APA Sample Paper: Experimental Psychology

    Writing the Experimental Report: Methods, Results, and Discussion. Tables, Appendices, Footnotes and Endnotes. References and Sources for More Information. APA Sample Paper: Experimental Psychology. Style Guide Overview MLA Guide APA Guide Chicago Guide OWL Exercises. Purdue OWL. Subject-Specific Writing.

  3. Lab Report Format: Step-by-Step Guide & Examples

    In psychology, a lab report outlines a study's objectives, methods, results, discussion, and conclusions, ensuring clarity and adherence to APA (or relevant) formatting guidelines. A typical lab report would include the following sections: title, abstract, introduction, method, results, and discussion.

  4. Psychological Report Writing

    In research report there are usually six sub-sections: (1) Abstract: This is always written last because it is a very brief summary: Include a one sentence summary, giving the topic to be studied. This may include the hypothesis and some brief theoretical background research, for example the name of the researchers whose work you have ...

  5. PDF B.S. Research Paper Example (Empirical Research Paper)

    B.S. Research Paper Example (Empirical Research Paper) This is an example of a research paper that was written in fulfillment of the B.S. research paper requirement. It uses APA style for all aspects except the cover sheet (this page; the cover sheet is required by the department). It describes research that the author was involved in while ...

  6. PDF Guide to Writing a Psychology Research Paper

    Component 1: The Title Page • On the right side of the header, type the first 2-3 words of your full title followed by the page number. This header will appear on every page of you report. • At the top of the page, type flush left the words "Running head:" followed by an abbreviation of your title in all caps.

  7. Writing a Research Report in American Psychological Association (APA

    Sample APA-Style Research Report. Figures 11.2, 11.3, 11.4, and 11.5 show some sample pages from an APA-style empirical research report originally written by undergraduate student Tomoe Suyama at California State University, Fresno. ... This is worth considering because people who volunteer to participate in psychological research have been ...

  8. PDF Reporting Qualitative Research in Psychology

    oping comprehensive reports that will support their review. Guidance is provided for how to best present qualitative research, with rationales and illustrations. The reporting standards for qualitative meta-analyses, which are integrative analy-ses of findings from across primary qualitative research, are presented in Chapter 8.

  9. PDF Writing Your Psychology Research Paper

    My students tell me that writing research papers is hard for at least two reasons. First, a blank document is overwhelming—a 10-page paper feels unreachable, especially when the first page is coming along so slowly. Second, writing well—clear, coherent, and thoughtful prose—does not come naturally.

  10. PDF Sample Paper: One-Experiment Paper

    Sample One-Experiment Paper (continued) emotional detection than young adults, or older adults could show a greater facilitation than. young adults only for the detection of positive information. The results lent some support to the. first two alternatives, but no evidence was found to support the third alternative.

  11. How to Write a Psychology Research Paper

    Remember to follow APA format as you write your paper and include in-text citations for any materials you reference. Make sure to cite any information in the body of your paper in your reference section at the end of your document. Writing a psychology research paper can be intimidating at first, but breaking the process into a series of ...

  12. PDF B.S. Research Paper Example (Literature Review)

    Talwar and Lee (2002) wanted to examine verbal and nonverbal behaviors of lying and. truth-telling children aged three- to seven-years-old. They hypothesized that young children were. more likely to incriminate themselves verbally. Talwar and Lee used a resistant temptation.

  13. PDF GUIDE TO WRITING RESEARCH REPORTS

    A useful rule of thumb is to try to write four concise sentences describing: (1) Why you did it, (2) What you did, (3) What results you found and (4) What you concluded. Write the Abstract after you have written the rest of the report. You may find it difficult to write a short abstract in one go.

  14. PDF RESEARCH REPORT (PSYCHOLOGY)

    report. download an APA 7 Sample 3. Establish what the research question for the study is, theaims, and what the hypothesis is. Write down a clear statement question,the aims and the hypothesis of the study you are reporting on to keep you focused on and on topic while you research and draft your report. The aims are the goals of the study,

  15. PDF Reporting Quantitative Research in Psychology

    In many ways, the report of a psychology research project contains a recipe. Without accurate descriptions of the ingredients (the manipulations and measures) and the ... This is where the sample, measures, and research design are detailed. Chapter 4 presents the JARS standards for reporting basic research designs.

  16. 11.2 Writing a Research Report in American Psychological Association (APA) Style

    In some areas of psychology, the titles of many empirical research reports are informal in a way that is perhaps best described as "cute." They usually take the form of a play on words or a well-known expression that relates to the topic under study. Here are some examples from recent issues of the Journal of Personality and Social Psychology.

  17. Research Paper Structure

    A complete research paper in APA style that is reporting on experimental research will typically contain Title page, Abstract, Introduction, Methods, Results, Discussion, and References sections. Many will also contain Figures and Tables, and some will have an Appendix or Appendices. These sections are detailed as follows (for a more in ...

  18. PDF Reports: Psychology example

    Reports in the discipline of Psychology usually report on empirical research. They consist of clear sections that reflect stages in the research process. The different sections in the report usually appear in a sequence of stages: • Title: informs the reader about the study

  19. Reporting Research Results in APA Style

    The results section of a quantitative research paper is where you summarize your data and report the findings of any relevant statistical analyses. The APA manual provides rigorous guidelines for what to report in quantitative research papers in the fields of psychology, education, and other social sciences. ... Example: Reporting ...

  20. Free APA Journal Articles

    Recently published articles from subdisciplines of psychology covered by more than 90 APA Journals™ publications. For additional free resources (such as article summaries, podcasts, and more), please visit the Highlights in Psychological Research page. Browse and read free articles from APA Journals across the field of psychology, selected by ...

  21. 12.3 Expressing Your Results

    There are a few important APA style guidelines here. First, statistical results are always presented in the form of numerals rather than words and are usually rounded to two decimal places (e.g., "2.00" rather than "two" or "2"). They can be presented either in the narrative description of the results or parenthetically—much like ...
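    The rounding rule in the snippet above (statistics reported as numerals, usually to two decimal places) can be sketched as a small formatting helper. This is a minimal illustration, not part of any APA tooling; the names `apa_number` and `report_t` are our own invention.

    ```python
    def apa_number(value: float, decimals: int = 2) -> str:
        """Render a statistic as a numeral rounded to a fixed number of
        decimal places, per the guideline above (e.g. 2 -> '2.00')."""
        return f"{value:.{decimals}f}"

    def report_t(t: float, df: int, p: float) -> str:
        """Assemble a parenthetical t-test report from rounded numerals."""
        return f"t({df}) = {apa_number(t)}, p = {apa_number(p)}"

    print(apa_number(2))              # 2.00
    print(report_t(2.0, 28, 0.046))   # t(28) = 2.00, p = 0.05
    ```

    Note that the full APA manual adds further conventions not captured here (for example, how p values below .001 are reported), so treat this only as a sketch of the rounding rule quoted above.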

  22. PDF Experimental Psychology Practical Report

    In a psychology lab report, there is usually an abstract, which functions to summarise the entire contents of the report. Sometimes your tutor will not expect you to write an abstract, as shown in this sample; check this with your tutor. See Ch 2, Getting started on your lab report. Sources are cited to support the information. The introduction provides the ...

  23. Qualitative Psychology Sample articles

    Sample articles from the APA journal Qualitative Psychology. ... Report lists eight recommendations for scientists, policymakers, and others to meet the ongoing risk to health, well-being, and civic life ... Recommendations for Designing and Reviewing Qualitative Research in Psychology: Promoting Methodological Integrity (PDF, 166KB) February 2017

  24. Welcome to the Purdue Online Writing Lab

    The Online Writing Lab at Purdue University houses writing resources and instructional material, and we provide these as a free service of the Writing Lab at Purdue.

  25. Italian validation of the mentalization scale (MentS)

    The research aimed to assess the reliability, factor structure, and validity of the Italian adaptation of the Mentalization Scale (MentS), a 28-item self-report questionnaire that measures mentalization across three dimensions. The psychometric properties of the Italian version were examined in two studies with large samples of adults and adolescents. The first study (Study 1) aimed to ...