• Research article
  • Open access
  • Published: 30 April 2021

A scoping review of the literature featuring research ethics and research integrity cases

  • Anna Catharina Vieira Armond   ORCID: orcid.org/0000-0002-7121-5354 1 ,
  • Bert Gordijn 2 ,
  • Jonathan Lewis 2 ,
  • Mohammad Hosseini 2 ,
  • János Kristóf Bodnár 1 ,
  • Soren Holm 3 , 4 &
  • Péter Kakuk 5  

BMC Medical Ethics, volume 22, Article number: 50 (2021)


Abstract

Background

The areas of Research Ethics (RE) and Research Integrity (RI) are rapidly evolving. Cases of research misconduct, other transgressions related to RE and RI, and forms of ethically questionable behavior are frequently published. The objective of this scoping review was to collect RE and RI cases, analyze their main characteristics, and discuss how these cases are represented in the scientific literature.

Methods

The search included cases involving a violation of, or misbehavior, poor judgment, or detrimental research practice in relation to, a normative framework. A search was conducted in PubMed, Web of Science, SCOPUS, JSTOR, Ovid, and Science Direct in March 2018, without language or date restrictions. Data relating to the articles and the cases were extracted from the case descriptions.

Results

A total of 14,719 records were identified, and 388 items were included in the qualitative synthesis. The papers contained 500 case descriptions. After applying the eligibility criteria, 238 cases were included in the analysis. In the case analysis, fabrication and falsification were the most frequently tagged violations (44.9%). Non-adherence to pertinent laws and regulations, such as lack of informed consent and REC approval, was the second most frequently tagged violation (15.7%), followed by patient safety issues (11.1%) and plagiarism (6.9%). Most cases (80.8%) were from the Medical and Health Sciences, 11.5% from the Natural Sciences, 4.3% from the Social Sciences, 2.1% from Engineering and Technology, and 1.3% from the Humanities. Paper retraction was the most prevalent sanction (45.4%), followed by exclusion from funding applications (35.5%).

Conclusions

Case descriptions found in academic journals are dominated by discussions of prominent cases and are mainly published in the news sections of journals. Our results show that biomedical research cases are overrepresented relative to other scientific fields, compared with their proportion of scientific publications. The cases mostly involve fabrication, falsification, and patient safety issues. This could have a significant impact on the academic representation of misbehaviors. The predominance of fabrication and falsification cases might divert the attention of the academic community from relevant but less visible violations, and from recently emerging forms of misbehavior.


Background

There has been an increase in academic interest in research ethics (RE) and research integrity (RI) over the past decade. This is due, among other reasons, to the changing research environment with new and complex technologies, increased pressure to publish, greater competition in grant applications, increased university-industry collaborative programs, and growth in international collaborations [ 1 ]. In addition, part of the academic interest in RE and RI is due to highly publicized cases of misconduct [ 2 ].

There is a growing body of published RE and RI cases, which may contribute to public attitudes regarding both science and scientists [ 3 ]. Different approaches have been used in order to analyze RE and RI cases. Studies focusing on ORI files (Office of Research Integrity) [ 2 ], retracted papers [ 4 ], quantitative surveys [ 5 ], data audits [ 6 ], and media coverage [ 3 ] have been conducted to understand the context, causes, and consequences of these cases.

Analyses of RE and RI cases often influence policies on the responsible conduct of research [ 1 ]. Moreover, details about cases facilitate a broader understanding of issues related to RE and RI and can drive interventions to address them. To date, however, no comprehensive study has collected and evaluated the RE and RI cases available in the academic literature. This review has been developed by members of the EnTIRE consortium to generate information on cases that will be made available on the Embassy of Good Science platform ( www.embassy.science ). Two separate analyses were conducted. The first uses the identified research articles to explore how the literature presents cases of RE and RI in relation to year of publication, country, article genre, and the violation involved. The second uses the cases extracted from the literature to characterize them and analyze the violations involved, the sanctions imposed, and the field of science.

Methods

This scoping review was performed according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement and the PRISMA Extension for Scoping Reviews (PRISMA-ScR). The full protocol was pre-registered and is available at https://ec.europa.eu/research/participants/documents/downloadPublic?documentIds=080166e5bde92120&appId=PPGMS .

Eligibility

Articles containing non-fictional case(s) involving a violation of, or misbehavior, poor judgment, or detrimental research practice in relation to, a normative framework were included. Cases unrelated to scientific activities, research institutions, or academic or industrial research and publication were excluded, as were articles that did not contain a substantial description of the case.

A normative framework consists of explicit rules, formulated in laws, regulations, codes, and guidelines, as well as implicit rules, which structure local research practices and influence the application of explicitly formulated rules. Therefore, if a case involves a violation of, or misbehavior, poor judgment, or detrimental research practice in relation to, a normative framework, it does so on the basis of explicit and/or implicit rules governing RE and RI practice.

Search strategy

A search was conducted in PubMed, Web of Science, SCOPUS, JSTOR, Ovid, and Science Direct in March 2018, without any language or date restrictions. Two parallel searches were performed with two sets of medical subject heading (MeSH) terms, one for RE and one for RI. The parallel searches generated two sets of data, enabling us to analyze and further investigate the overlaps in, differences in, and evolution of the representation of RE and RI cases in the academic literature. The terms used in the first search were: (("research ethics") AND (violation OR unethical OR misconduct)). The terms used in the parallel search were: (("research integrity") AND (violation OR unethical OR misconduct)). The validity of the search strategy was tested in a pilot search, in which different keyword combinations and search strings were used and the abstracts of the first hundred hits in each database were read (Additional file 1 ).
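For illustration, the PubMed arm of such a search can be reproduced programmatically. The sketch below is ours, not part of the published protocol: it assumes Biopython's Entrez module, a placeholder email address, and a result set small enough for a single esearch call (larger sets would require NCBI's history feature).

```python
# Minimal sketch (not the authors' pipeline): reproduce the PubMed arm of the
# two parallel searches with Biopython's Entrez wrapper.
from Bio import Entrez  # pip install biopython

Entrez.email = "your.name@example.org"  # placeholder; NCBI requires a real address

SEARCHES = {
    "RE": '("research ethics") AND (violation OR unethical OR misconduct)',
    "RI": '("research integrity") AND (violation OR unethical OR misconduct)',
}

def pubmed_ids(query, retmax=10000):
    """Return the set of PubMed IDs matching a boolean query (up to retmax hits)."""
    handle = Entrez.esearch(db="pubmed", term=query, retmax=retmax)
    record = Entrez.read(handle)
    handle.close()
    return set(record["IdList"])

re_ids = pubmed_ids(SEARCHES["RE"])
ri_ids = pubmed_ids(SEARCHES["RI"])
# Mirror the review's screening step: combine the parallel searches and
# count records found by both strategies only once.
print(len(re_ids), len(ri_ids), len(re_ids | ri_ids))
```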

After searching the databases with these two search strings, the titles and abstracts of extracted items were read by three contributors independently (ACVA, PK, and KB). Articles that could potentially meet the inclusion criteria were identified. After independent reading, the three contributors compared their results to determine which studies were to be included in the next stage. In case of a disagreement, items were reassessed in order to reach a consensus. Subsequently, qualified items were read in full.

Data extraction

Data extraction was divided among three assessors (ACVA, PK, and KB). Each list of extracted data generated by one assessor was cross-checked by the other two. In case of any inconsistency, the case was reassessed to reach a consensus. The following categories were employed to analyze the data of each extracted item (where available): (I) author(s); (II) title; (III) year of publication; (IV) country (according to the first author's affiliation); (V) article genre; (VI) year of the case; (VII) country in which the case took place; (VIII) institution(s) and person(s) involved; (IX) field of science (FOS-OECD classification) [ 7 ]; (X) types of violation (see below); (XI) case description; and (XII) consequences for the persons or institutions involved in the case.
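As an illustration only (the paper describes these categories in prose, not as a formal schema), the twelve extraction fields could be captured in a small record type such as the following; all field names are our own.

```python
# Hypothetical record type for the twelve extraction categories (I)-(XII);
# not taken from the authors' materials.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ExtractedItem:
    authors: str                              # (I) author(s)
    title: str                                # (II) title
    year_published: int                       # (III) year of publication
    author_country: str                       # (IV) country of first author's affiliation
    genre: str                                # (V) article genre, e.g. "news"
    case_year: Optional[int] = None           # (VI) year of the case
    case_country: Optional[str] = None        # (VII) country where the case took place
    involved: List[str] = field(default_factory=list)      # (VIII) institutions/persons
    field_of_science: Optional[str] = None    # (IX) FOS-OECD classification
    violations: List[str] = field(default_factory=list)    # (X) types of violation
    description: str = ""                     # (XI) case description
    consequences: List[str] = field(default_factory=list)  # (XII) sanctions/outcomes
```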

Two sets of data were created after the data extraction process. One set was used for the analysis of the articles and their representation in the literature, and the other was created for the analysis of the cases. In the set for the analysis of articles, all eligible items, including duplicate cases (cases found in more than one paper, e.g. the Hwang and Baltimore cases), were included. The aim was to understand the historical aspects of the violations reported in the literature as well as the genres in which cases are described and discussed. For this set, the following variables were analyzed: year of publication (III); country (IV); article genre (V); and types of violation (X).

For the analysis of cases, all duplicated cases, as well as cases that did not contain enough identifying details (e.g. names of the people or institutions involved, country, date) to differentiate them from others, were excluded. In this set, prominent cases (i.e. those found in more than one paper) were listed only once, generating a set containing solely unique cases. These additional exclusion criteria were applied to avoid multiple representations of the same case. For the analysis of cases, the following variables were considered: (VI) year of the case; (VII) country in which the case took place; (VIII) institution(s) and person(s) involved; (IX) field of science (FOS-OECD classification); (X) types of violation; (XI) case details; and (XII) consequences for the persons or institutions involved in the case.

Article genre classification

We used ten categories to capture differences in genre. A case description was assigned to the "news" genre if it was published in the news section of a scientific journal or a newspaper. Although we did not develop a search strategy for newspaper articles, some newspapers (e.g. the New York Times) are indexed in scientific databases such as PubMed. The same method was used to allocate case descriptions to the "editorial", "commentary", "misconduct notice", "retraction notice", "review", "letter", and "book review" genres. We applied the "case analysis" genre if a case description included a normative analysis of the case. The "educational" genre was used when a case description was incorporated to illustrate RE and RI guidelines or institutional policies.

Categorization of violations

For the extraction process, we tagged each article using its own terminology for the violations/ethical issues involved in the event (e.g. plagiarism, falsification, ghost authorship, conflict of interest). Where the terminology was incompatible with the case description, additional categories were added to the original terminology for the same case. Subsequently, the resulting list of terms was standardized using the list of major and minor misbehaviors developed by Bouter and colleagues [ 8 ]. This list consists of 60 items classified into four categories: study design, data collection, reporting, and collaboration issues (Additional file 2 ).
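The standardization step can be pictured as a simple mapping from the articles' free-text tags onto the Bouter et al. items and their four categories. The sketch below is illustrative only; the mapping entries are invented examples, not the coding table in Additional file 2.

```python
# Illustrative only: map free-text violation tags onto hypothetical entries of
# the Bouter et al. four-category scheme; unknown tags are flagged for consensus.
BOUTER_CATEGORY = {
    "fabrication": "reporting",
    "falsification": "reporting",
    "plagiarism": "reporting",
    "lack of informed consent": "data collection",
    "ghost authorship": "collaboration",
    "errors in experimental design": "study design",
}

def standardize(raw_tags):
    """Return {normalized tag: category}, keeping unresolved tags visible."""
    return {t.lower().strip(): BOUTER_CATEGORY.get(t.lower().strip(), "unresolved")
            for t in raw_tags}

print(standardize(["Plagiarism", "Ghost authorship", "self-citation"]))
# {'plagiarism': 'reporting', 'ghost authorship': 'collaboration', 'self-citation': 'unresolved'}
```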

Results

Systematic search

A total of 11,641 records were identified through the RE search and 3,078 through the RI search. The results of the parallel searches were combined and the duplicates removed. The remaining 10,556 records were screened, and at this stage 9,750 items were excluded because they did not fulfill the inclusion criteria. 806 items were selected for full-text reading. Subsequently, 388 articles were included in the qualitative synthesis (Fig.  1 ).

Figure 1. Flow diagram

Of the 388 articles, 157 were identified only via the RE search, 87 only via the RI search, and 144 via both search strategies. The eligible articles contained 500 case descriptions, which were used for the article analysis. Of these, 256 case descriptions concerned the same 50 cases. The Hwang case was the most frequently described case, discussed in 27 articles, and the ten most described cases were found in 132 articles (Table 1 ).

For the analysis of cases, 206 duplicates (41.2% of the case descriptions) were excluded, and 56 cases (11.2%) were excluded for not providing enough information to distinguish them from other cases, resulting in 238 eligible cases.

Analysis of the articles

The categories used to classify the violations include those that pertain to the different kinds of scientific misconduct (falsification, fabrication, plagiarism), detrimental research practices (authorship issues, duplication, peer review, errors in experimental design, and mentoring), and "other misconduct" (according to the definitions of the National Academies of Sciences, Engineering, and Medicine [ 1 ]). Each case could involve more than one type of violation, and the majority of cases presented more than one violation or ethical issue, with a mean of 1.56 violations per case. Figure  2 presents the frequency of each violation tagged to the articles. Falsification and fabrication were the most frequently tagged violations, accounting respectively for 29.1% and 30.0% of the taggings (n = 780) and involved in 46.8% and 45.4% of the articles (n = 500 case descriptions). Problems with informed consent represented 9.1% of the taggings and 14% of the articles, followed by patient safety (6.7% and 10.4%) and plagiarism (5.4% and 8.4%). Detrimental research practices, such as authorship issues, duplication, peer review, errors in experimental design, mentoring, and self-citation, were mentioned cumulatively in 7.0% of the articles.

Figure 2. Tagged violations from the article analysis

Analysis of the cases

Figure  3 presents the frequency and percentage of each violation found in the cases. Each case could include more than one item from the list. The 238 cases were tagged 305 times, with a mean of 1.28 items per case. Fabrication and falsification were the most frequently tagged violations (44.9% of taggings), involved in 57.7% of the cases (n = 238). Non-adherence to pertinent laws and regulations, such as lack of informed consent and REC approval, was the second most frequently tagged violation (15.7%), involved in 20.2% of the cases. Patient safety issues were the third most frequently tagged (11.1%), involved in 14.3% of the cases, followed by plagiarism (6.9% and 8.8%). The list of major and minor misbehaviors [ 8 ] classifies the items into study design, data collection, reporting, and collaboration issues. Our results show that 56.0% of the tagged violations involved reporting issues, 16.4% data collection, 15.1% collaboration, and 12.5% study design. Items from the original list that do not appear in the results were not involved in any of the collected cases.
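As a quick sanity check of the reported tagging rates, the following sketch uses only the counts quoted in the text, not the underlying dataset.

```python
# Arithmetic check of the tagging rates quoted above (counts taken from the text).
article_taggings, article_units = 780, 500   # taggings vs. case descriptions (article analysis)
case_taggings, case_units = 305, 238         # taggings vs. unique cases (case analysis)

print(round(article_taggings / article_units, 2))   # 1.56 violations per case description
print(round(case_taggings / case_units, 2))         # 1.28 items per case
print(round(0.449 * case_taggings))                 # ~137 fabrication/falsification taggings implied by 44.9%
```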

Figure 3. Major and minor misbehavior items from the analysis of cases

Article genre

The articles were mostly classified as "news" (33.0%), followed by "case analysis" (20.9%), "editorial" (12.1%), "commentary" (10.8%), "misconduct notice" (10.3%), "retraction notice" (6.4%), "letter" (3.6%), "educational paper" (1.3%), "review" (1%), and "book review" (0.3%) (Fig.  4 ). The articles classified as "news" and "case analysis" predominantly covered prominent cases. Items classified as "news" often followed the investigation findings step by step as a case progressed, which might explain the high prevalence of this genre. The case analyses consisted mainly of normative assessments of prominent cases. The misconduct and retraction notices included the largest number of unique cases, although a relatively large portion of the retraction and misconduct records could not be included because of insufficient case details. The articles classified as "editorial", "commentary", and "letter" also included unique cases.

Figure 4. Article genre of included articles

Article analysis

The dates of the eligible articles range from 1983 to 2018 with notable peaks between 1990 and 1996, most probably associated with the Gallo [ 9 ] and Imanishi-Kari cases [ 10 ], and around 2005 with the Hwang [ 11 ], Wakefield [ 12 ], and CNEP trial cases [ 13 ] (Fig.  5 ). The trend line shows an increase in the number of articles over the years.

Figure 5. Frequency of articles according to the year of publication

Case analysis

The dates of the included cases range from 1798 to 2016. Two cases occurred before 1910, one in 1798 and the other in 1845. Figure  6 shows the number of cases per year from 1910 onwards. The number of cases began to rise in the early 1980s, reaching its highest frequency in 2004, with 13 cases.

Figure 6. Frequency of cases per year

Geographical distribution

The first analysis concerned the authors' affiliations and the corresponding author's address. Where the article contained more than one country in the affiliation list, only the first author's location was considered. Eighty-one articles were excluded because the authors' affiliations were not available, leaving 307 articles in the analysis. The articles originated from 26 different countries (Additional file 3 ). Most of the articles emanated from the USA and the UK (61.9% and 14.3% of articles, respectively), followed by Canada (4.9%), Australia (3.3%), China (1.6%), Japan (1.6%), Korea (1.3%), and New Zealand (1.3%). Some of the most discussed cases occurred in the USA: the Imanishi-Kari, Gallo, and Schön cases [ 9 , 10 ]. Intensely discussed cases are also associated with Canada (Fisher/Poisson and Olivieri cases), the UK (Wakefield and CNEP trial cases), South Korea (Hwang case), and Japan (RIKEN case) [ 12 , 14 ]. In terms of percentages, North America and Europe stand out in the number of articles (Fig.  7 ).

Figure 7. Percentage of articles and cases by continent

The case analysis considered the location where the case took place, taking into account the institutions involved. For cases involving more than one country, all the countries were considered. Three cases were excluded from the analysis due to insufficient information. In the case analysis, 40 countries were involved in 235 different cases (Additional file 4 ). Our findings show that most of the reported cases occurred in the USA and the United Kingdom (59.6% and 9.8% of cases, respectively). In addition, a number of cases occurred in Canada (6.0%), Japan (5.5%), China (2.1%), and Germany (2.1%). In terms of percentages, North America and Europe stand out in the number of cases (Fig.  7 ). To enable comparison, we additionally collected the number of published documents by country, available from the SCImago Journal & Country Rank [ 16 ]. The numbers correspond to documents published from 1996 to 2019. The USA occupies first place in the number of documents, with 21.9%, followed by China (11.1%), the UK (6.3%), Germany (5.5%), and Japan (4.9%).

Field of science

The cases were classified according to field of science. Four cases (1.7%) could not be classified due to insufficient information. Where information was available, 80.8% of cases were from the Medical and Health Sciences, 11.5% from the Natural Sciences, 4.3% from the Social Sciences, 2.1% from Engineering and Technology, and 1.3% from the Humanities (Fig.  8 ). Additionally, we retrieved the number of published documents by scientific field, available from SCImago [ 16 ]. Of the total number of scientific publications, 41.5% relate to the natural sciences, 22% to engineering, 25.1% to the health and medical sciences, 7.8% to the social sciences, 1.9% to the agricultural sciences, and 1.7% to the humanities.

Figure 8. Field of science from the analysis of cases

Sanctions

This variable aimed to collect information on possible consequences and sanctions imposed by funding agencies, scientific journals, and/or institutions. Ninety-seven cases could not be classified due to insufficient information, leaving 141 cases in the analysis. Each case could include more than one outcome. Most cases (45.4%) involved paper retraction, followed by exclusion from funding applications (35.5%) (Table 2 ).

Discussion

RE and RI cases have been increasingly discussed publicly, affecting public attitudes towards scientists and raising awareness about ethical issues, violations, and their wider consequences [ 5 ]. Different approaches have been applied in order to quantify and address research misbehaviors [ 5 , 17 , 18 , 19 ]. However, most cases are investigated confidentially, and the findings remain undisclosed even after the investigation [ 19 , 20 ]. This study therefore aimed to collect the RE and RI cases available in the scientific literature, understand how the cases are discussed, and identify the potential of case descriptions to raise awareness of RE and RI.

We collected and analyzed 500 detailed case descriptions from 388 articles, and our results show that they mostly relate to extensively discussed and notorious cases. Approximately half of all included case descriptions concerned cases mentioned in at least two different articles, and the ten most commonly mentioned cases were discussed in 132 articles.

The prominence of certain cases in the literature, reflected in the number of duplicated case descriptions we found (e.g. the Hwang case), can be explained by the type of article in which cases are discussed and by the type of violation involved. In the article genre analysis, 33% of the cases were described in the news sections of scientific publications. Our findings show that almost all article genres discuss the cases that are new and in vogue: once a case enters the public domain, it is intensely discussed in the media and by scientists, and some prominent cases have been discussed for more than 20 years (Table 1 ). Misconduct and retraction notices were exceptions in the article genre analysis, as they presented mostly unique cases. The misconduct notices were mainly found in the NIH repository, which is indexed in the searched databases. Some federal funding agencies, such as the NIH, usually publicize investigation findings associated with the research they fund. The results derived from the NIH repository also explain the large proportion of articles from the US (61.9%). However, in some cases only a few details are provided. For cases that have not received federal funding and have not been reported to federal authorities, the investigation is conducted by local institutions; in such instances, the reporting of findings depends on each institution's policy and willingness to disclose information [ 21 ].

The other exception involves retraction notices. Despite the existence of ethical guidelines [ 22 ], there is no uniform, common approach to how a journal should report a retraction. The Retraction Watch website suggests two lists of information that should be included in a retraction notice to satisfy minimum and optimum requirements [ 22 , 23 ]. As well as disclosing the reason for the retraction and information regarding the retraction process, optimal notices should include: (I) the date when the journal was first alerted to potential problems; (II) details regarding institutional investigations and associated outcomes; (III) the effects on other papers published by the same authors; (IV) statements about more recent replications, only if and when these have been validated by a third party; (V) details regarding the journal's sanctions; and (VI) details regarding any lawsuits that have been filed regarding the case. The lack of transparency and information in retraction notices has also been noted in studies that collected and evaluated retractions [ 24 ]. According to Resnik and Dinse [ 25 ], retraction notices related to cases of misconduct tend to avoid naming the specific violation involved: only 32.8% of the notices they examined identified the actual problem, such as fabrication, falsification, or plagiarism, while 58.8% reported the case as replication failure, loss of data, or error. Potential explanations for euphemisms and vague claims in retraction notices authored by editors include the possibility of legal action from the authors, honest or self-reported errors, and a lack of resources to conduct thorough investigations. In addition, the lack of transparency can also be explained by the conflicts of interest of the article's author(s), since notices are often written by the authors of the retracted article.

The analysis of violations/ethical issues shows the dominance of fabrication and falsification cases and helps explain the high prevalence of prominent cases. Non-adherence to laws and regulations (REC approval, informed consent, and data protection) was the second most prevalent issue, followed by patient safety, plagiarism, and conflicts of interest. For the five most tagged violations, prevalence in the case analysis was higher than in the article analysis involving the same violations. The only exceptions were fabrication and falsification, which represented 45% of the tagged violations in the analysis of cases but 59.1% in the article analysis. This disproportion indicates a predilection for publishing discussions of fabrication and falsification over other serious violations. Complex cases involving these types of violation make good headlines, following a customary pattern of writing about cases that catch the public's and media's attention [ 26 ]. The way cases of RE and RI violations are explored in the literature gives the sense that only a few scientists are "bad apples" and that they are usually discovered, investigated, and sanctioned accordingly, implying that the integrity of science in general remains relatively untouched by these violations. However, studies on the determinants of misconduct show that scientific misconduct is a systemic problem, involving not only individual but also structural and institutional factors, and that a combined effort is necessary to change this scenario [ 27 , 28 ].

Analysis of cases

A notable increase in RE and RI cases occurred in the 1990s, with a gradual rise until approximately 2006. This result is in agreement with studies that evaluated paper retractions [ 24 , 29 ]; although our study did not focus only on retractions, the trend is similar. This increase in cases should not be attributed solely to the growth in the number of publications, since studies of retractions show that the percentage of retractions due to fraud, relative to the total number of articles, has increased almost tenfold since 1975. Our results also show a gradual reduction in the number of cases from 2011 and a sharper drop in 2015. However, this reduction should be interpreted cautiously, because many investigations take years to complete and to have their findings disclosed. The ORI has shown that, from 2001 to 2010, the investigation of its cases took an average of 20.48 months, with a maximum investigation time of more than 9 years [ 24 ].

The countries from which most cases were reported were the USA (59.6%), the UK (9.8%), Canada (6.0%), Japan (5.5%), and China (2.1%). When analyzed by continent, the highest percentage of cases took place in North America, followed by Europe, Asia, Oceania, Latin America, and Africa. The predominance of cases from the USA is predictable, since the country publishes more scientific articles than any other, accounting for 21.8% of the total documents according to SCImago [ 16 ]. However, the same interpretation does not apply to China, which occupies second position in the ranking, with 11.2%. These differences in geographical distribution were also found in a study that collected published research on research integrity [ 30 ]. The results of Aubert Bonn and Pinxten (2019) show that studies from the United States accounted for more than half of their sample, and that although China is one of the leaders in scientific publications, it represented only 0.7% of the sample. Our findings can also be explained by a search strategy that included only English keywords. Since the majority of RE and RI cases are investigated and have their findings disclosed locally, the use of English keywords and terms in the search strategy is a limitation. Moreover, our findings do not allow us to draw inferences regarding the incidence or prevalence of misconduct around the world. Instead, they show where there is a culture of publicly disclosing information and openly discussing RE and RI cases in English-language documents.

Scientific field analysis

The results show that 80.8% of reported cases occurred in the medical and health sciences, whilst only 1.3% occurred in the humanities. This disciplinary difference has also been observed in studies on research integrity climates. A study conducted by Haven and colleagues [ 28 ] associated seven subscales of research climate with the disciplinary field. The subscales were: (1) Responsible Conduct of Research (RCR) resources, (2) regulatory quality, (3) integrity norms, (4) integrity socialization, (5) supervisor/supervisee relations, (6) (lack of) integrity inhibitors, and (7) expectations. The results, based on the seven subscale scores, show that researchers from the humanities and social sciences have the lowest perception of the RI climate. By contrast, the natural sciences expressed the highest perception of the RI climate, followed by the biomedical sciences. There are also significant differences in the depth and extent of the regulatory environments of different disciplines (e.g. the existence of laws, codes of conduct, policies, relevant ethics committees, or authorities). These findings corroborate our results, as those areas of science most familiar with RI tend to explore the subject further and, consequently, are more likely to publish case details. Although the volume of published research in each area also influences the number of cases, the predominance of medical and health sciences cases is not aligned with the trends in the volume of published research. According to the SCImago Journal & Country Rank [ 16 ], the natural sciences occupy first place in the number of publications (41.5%), followed by the medical and health sciences (25.1%), engineering (22%), the social sciences (7.8%), and the humanities (1.7%). Moreover, biomedical journals are overrepresented among the top scientific journals by impact factor ranking, and these journals usually have clear policies on research misconduct. High-impact journals have greater visibility and scrutiny and are consequently more likely to have been the subject of misconduct investigations. Additionally, the most well-known general medical journals, including the NEJM, The Lancet, and the BMJ, employ journalists to write their news sections; because these journals have the resources to produce extensive news coverage, medical cases are more likely to be discussed.

Violations analysis

In the analysis of violations, the cases were categorized into major and minor misbehaviors. Most cases involved data fabrication and falsification, followed by cases involving non-adherence to laws and regulations, patient safety, plagiarism, and conflicts of interest. When classified by category, 12.5% of the tagged violations involved issues in study design, 16.4% in data collection, 56.0% in reporting, and 15.1% in collaboration. Approximately 80% of the tagged violations involved serious research misbehaviors, based on the ranking of research misbehaviors proposed by Bouter and colleagues [ 8 ]. However, as demonstrated in a meta-analysis by Fanelli (2009) [ 17 ], most self-declared cases involve questionable research practices: 33.7% of scientists admitted to questionable research practices, and 72% reported such practices when asked about the behavior of colleagues. This contrasts with admission rates of 1.97% and 14.12%, respectively, for fabrication, falsification, and plagiarism. However, Fanelli's meta-analysis does not cover research misbehaviors in their wider sense but focuses on behaviors that bias research results (i.e. fabrication and falsification, intentional non-publication of results, biased methodology, misleading reporting). In our study, the majority of cases involved fabrication, falsification, or plagiarism (66.4%). Overrepresentation of some types of violation, and underrepresentation of others, might lead to misguided efforts, as cases that receive intense publicity eventually influence policies relating to scientific misconduct and RI [ 20 ].

Sanctions analysis

The five most prevalent outcomes were paper retraction, exclusion from funding applications, exclusion from service or position, dismissal or suspension, and paper correction. This result is similar to that found by Redman and Merz [ 31 ], who collected data from misconduct cases provided by the ORI. Moreover, their results show that fabrication and falsification cases are 8.8 times more likely than others to receive funding exclusions; such cases also received, on average, 0.6 more sanctions per case. Punishments for misconduct remain under discussion, ranging from the criminalization of the more serious forms of misconduct [ 32 ] to social punishments, such as those recently introduced in China [ 33 ]. The most common sanction identified by our analysis, paper retraction, is consistent with the most prevalent types of violation, namely falsification and fabrication.

Publicizing scientific misconduct

The lack of publicly available summaries of misconduct investigations makes it difficult to share experiences and evaluate the effectiveness of policies and training programs. Publicizing scientific misconduct can have serious consequences and creates a stigma around those involved in the case; for instance, publicized allegations can damage the reputation of the accused even when they are later exonerated [ 21 ]. Thus, for published cases, it is the responsibility of the authors and editors to determine whether the name(s) of those involved should be disclosed. On the one hand, disclosing the name(s) of those involved may encourage others in the community to uphold good standards. On the other hand, it is argued that someone who has made a mistake should have the chance to defend his or her reputation. Regardless of whether a person's name is withheld or disclosed, case reports have an important educational function and can help guide RE- and RI-related policies [ 34 ]. A recent paper by Gunsalus [ 35 ] proposes a three-part approach to strengthening transparency in misconduct investigations: the first part consists of a checklist [ 36 ]; the second suggests that an external peer reviewer be involved in assessing investigation reports; and the third calls for the publication of the peer reviewer's findings.

Limitations

One possible limitation of our study is the search strategy. Although we conducted pilot searches and sensitivity tests to arrive at the most feasible and precise search strategy, we cannot exclude the possibility of having missed important cases. Furthermore, the use of English keywords is another limitation: since most investigations are performed locally and published in local repositories, our search only allowed us to access cases from English-speaking countries or cases discussed in academic publications written in English. Additionally, the published cases are not representative of all instances of misconduct, since most are never discovered, and of those discovered, not all are fully investigated or have their findings published. The lack of information in the extracted case descriptions is a further limitation that affects the interpretation of our results. In our review, only 25 retraction notices contained sufficient information to be included in the analysis under the inclusion criteria. Although our search strategy was not focused specifically on retraction and misconduct notices, we believe that, had sufficiently detailed information been available in such notices, the search strategy would have identified them.

Conclusions

Case descriptions found in academic journals are dominated by discussions of prominent cases and are mainly published in the news sections of journals. Our results show that biomedical research cases are overrepresented relative to other scientific fields when compared with the volume of publications produced by each field. Moreover, published cases mostly involve fabrication, falsification, and patient safety issues. This could have a significant impact on the academic representation of ethical issues in RE and RI. The predominance of fabrication and falsification cases might divert the attention of the academic community from relevant but less visible violations and ethical issues, and from recently emerging forms of misbehavior.

Availability of data and materials

This review has been developed by members of the EnTIRE project in order to generate information on the cases that will be made available on the Embassy of Good Science platform ( www.embassy.science ). The dataset supporting the conclusions of this article is available in the Open Science Framework (OSF) repository in https://osf.io/3xatj/?view_only=313a0477ab554b7489ee52d3046398b9 .

National Academies of Sciences, Engineering, and Medicine. Fostering integrity in research. Washington (DC): National Academies Press; 2017.

Davis MS, Riske-Morris M, Diaz SR. Causal factors implicated in research misconduct: evidence from ORI case files. Sci Eng Ethics. 2007;13(4):395–414. https://doi.org/10.1007/s11948-007-9045-2 .


Ampollini I, Bucchi M. When public discourse mirrors academic debate: research integrity in the media. Sci Eng Ethics. 2020;26(1):451–74. https://doi.org/10.1007/s11948-019-00103-5 .

Hesselmann F, Graf V, Schmidt M, Reinhart M. The visibility of scientific misconduct: a review of the literature on retracted journal articles. Curr Sociol La Sociologie contemporaine. 2017;65(6):814–45. https://doi.org/10.1177/0011392116663807 .

Martinson BC, Anderson MS, de Vries R. Scientists behaving badly. Nature. 2005;435(7043):737–8. https://doi.org/10.1038/435737a .

Loikith L, Bauchwitz R. The essential need for research misconduct allegation audits. Sci Eng Ethics. 2016;22(4):1027–49. https://doi.org/10.1007/s11948-016-9798-6 .

OECD. Revised field of science and technology (FoS) classification in the Frascati manual. Working Party of National Experts on Science and Technology Indicators 2007. p. 1–12.

Bouter LM, Tijdink J, Axelsen N, Martinson BC, ter Riet G. Ranking major and minor research misbehaviors: results from a survey among participants of four World Conferences on Research Integrity. Res Integrity Peer Rev. 2016;1(1):17. https://doi.org/10.1186/s41073-016-0024-5 .

Greenberg DS. Resounding echoes of Gallo case. Lancet. 1995;345(8950):639.

Dresser R. Giving scientists their due. The Imanishi-Kari decision. Hastings Center Rep. 1997;27(3):26–8.

Hong ST. We should not forget lessons learned from the Woo Suk Hwang’s case of research misconduct and bioethics law violation. J Korean Med Sci. 2016;31(11):1671–2. https://doi.org/10.3346/jkms.2016.31.11.1671 .

Opel DJ, Diekema DS, Marcuse EK. Assuring research integrity in the wake of Wakefield. BMJ (Clinical research ed). 2011;342(7790):179. https://doi.org/10.1136/bmj.d2 .

Wells F. The Stoke CNEP Saga: did it need to take so long? J R Soc Med. 2010;103(9):352–6. https://doi.org/10.1258/jrsm.2010.10k010 .

Normile D. RIKEN panel finds misconduct in controversial paper. Science. 2014;344(6179):23. https://doi.org/10.1126/science.344.6179.23 .

Wager E. The Committee on Publication Ethics (COPE): Objectives and achievements 1997–2012. La Presse Médicale. 2012;41(9):861–6. https://doi.org/10.1016/j.lpm.2012.02.049 .

SCImago. SJR — SCImago Journal & Country Rank [Portal]. n.d. http://www.scimagojr.com . Accessed 3 Feb 2021.

Fanelli D. How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS ONE. 2009;4(5):e5738. https://doi.org/10.1371/journal.pone.0005738 .

Steneck NH. Fostering integrity in research: definitions, current knowledge, and future directions. Sci Eng Ethics. 2006;12(1):53–74. https://doi.org/10.1007/PL00022268 .

DuBois JM, Anderson EE, Chibnall J, Carroll K, Gibb T, Ogbuka C, et al. Understanding research misconduct: a comparative analysis of 120 cases of professional wrongdoing. Account Res. 2013;20(5–6):320–38. https://doi.org/10.1080/08989621.2013.822248 .

National Academy of Sciences, National Academy of Engineering, Institute of Medicine. Panel on Scientific Responsibility and the Conduct of Research. Responsible Science: Ensuring the Integrity of the Research Process, Volume I. Washington (DC): National Academies Press (US); 1992.

Bauchner H, Fontanarosa PB, Flanagin A, Thornton J. Scientific misconduct and medical journals. JAMA. 2018;320(19):1985–7. https://doi.org/10.1001/jama.2018.14350 .

COPE Council. COPE Guidelines: Retraction Guidelines. 2019. https://doi.org/10.24318/cope.2019.1.4 .

Retraction Watch. What should an ideal retraction notice look like? 2015, May 21. https://retractionwatch.com/2015/05/21/what-should-an-ideal-retraction-notice-look-like/ .

Fang FC, Steen RG, Casadevall A. Misconduct accounts for the majority of retracted scientific publications. Proc Natl Acad Sci USA. 2012;109(42):17028–33. https://doi.org/10.1073/pnas.1212247109 .

Resnik DB, Dinse GE. Scientific retractions and corrections related to misconduct findings. J Med Ethics. 2013;39(1):46–50. https://doi.org/10.1136/medethics-2012-100766 .

de Vries R, Anderson MS, Martinson BC. Normal misbehavior: scientists talk about the ethics of research. J Empir Res Hum Res Ethics JERHRE. 2006;1(1):43–50. https://doi.org/10.1525/jer.2006.1.1.43 .

Sovacool BK. Exploring scientific misconduct: isolated individuals, impure institutions, or an inevitable idiom of modern science? J Bioethical Inquiry. 2008;5(4):271. https://doi.org/10.1007/s11673-008-9113-6 .

Haven TL, Tijdink JK, Martinson BC, Bouter LM. Perceptions of research integrity climate differ between academic ranks and disciplinary fields: results from a survey among academic researchers in Amsterdam. PLoS ONE. 2019;14(1):e0210599. https://doi.org/10.1371/journal.pone.0210599 .

Trikalinos NA, Evangelou E, Ioannidis JPA. Falsified papers in high-impact journals were slow to retract and indistinguishable from nonfraudulent papers. J Clin Epidemiol. 2008;61(5):464–70. https://doi.org/10.1016/j.jclinepi.2007.11.019 .

Aubert Bonn N, Pinxten W. A decade of empirical research on research integrity: What have we (not) looked at? J Empir Res Hum Res Ethics. 2019;14(4):338–52. https://doi.org/10.1177/1556264619858534 .

Redman BK, Merz JF. Scientific misconduct: do the punishments fit the crime? Science. 2008;321(5890):775. https://doi.org/10.1126/science.1158052 .

Bülow W, Helgesson G. Criminalization of scientific misconduct. Med Health Care Philos. 2019;22(2):245–52. https://doi.org/10.1007/s11019-018-9865-7 .

Cyranoski D. China introduces “social” punishments for scientific misconduct. Nature. 2018;564(7736):312. https://doi.org/10.1038/d41586-018-07740-z .

Bird SJ. Publicizing scientific misconduct and its consequences. Sci Eng Ethics. 2004;10(3):435–6. https://doi.org/10.1007/s11948-004-0001-0 .

Gunsalus CK. Make reports of research misconduct public. Nature. 2019;570(7759):7. https://doi.org/10.1038/d41586-019-01728-z .

Gunsalus CK, Marcus AR, Oransky I. Institutional research misconduct reports need more credibility. JAMA. 2018;319(13):1315–6. https://doi.org/10.1001/jama.2018.0358 .


Acknowledgements

The authors wish to thank the EnTIRE research group. The EnTIRE project (Mapping Normative Frameworks for Ethics and Integrity of Research) aims to create an online platform that makes RE+RI information easily accessible to the research community. The EnTIRE Consortium is composed by VU Medical Center, Amsterdam, gesinn. It Gmbh & Co Kg, KU Leuven, University of Split School of Medicine, Dublin City University, Central European University, University of Oslo, University of Manchester, European Network of Research Ethics Committees.

EnTIRE project (Mapping Normative Frameworks for Ethics and Integrity of Research) has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement N 741782. The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Author information

Authors and Affiliations

Department of Behavioural Sciences, Faculty of Medicine, University of Debrecen, Móricz Zsigmond krt. 22. III. Apartman Diákszálló, Debrecen, 4032, Hungary

Anna Catharina Vieira Armond & János Kristóf Bodnár

Institute of Ethics, School of Theology, Philosophy and Music, Dublin City University, Dublin, Ireland

Bert Gordijn, Jonathan Lewis & Mohammad Hosseini

Centre for Social Ethics and Policy, School of Law, University of Manchester, Manchester, UK

Soren Holm

Center for Medical Ethics, HELSAM, Faculty of Medicine, University of Oslo, Oslo, Norway

Soren Holm

Center for Ethics and Law in Biomedicine, Central European University, Budapest, Hungary

Péter Kakuk


Contributions

All authors (ACVA, BG, JL, MH, JKB, SH, and PK) developed the idea for the article. ACVA, PK, and JKB performed the literature search and data analysis; ACVA and PK produced the draft; and all authors critically revised it. All authors have read and approved the manuscript.

Corresponding author

Correspondence to Anna Catharina Vieira Armond .

Ethics declarations

Ethics approval and consent to participate.

Not applicable.

Consent for publication

Not applicable.

Competing interests.

The authors declare that they have no conflict of interest.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1. Pilot search and search strategy.

Additional file 2. List of major and minor misbehavior items (developed by Bouter LM, Tijdink J, Axelsen N, Martinson BC, ter Riet G. Ranking major and minor research misbehaviors: results from a survey among participants of four World Conferences on Research Integrity. Res Integrity Peer Rev. 2016;1(1):17. https://doi.org/10.1186/s41073-016-0024-5 ).

Additional file 3. Table containing the number and percentage of countries included in the analysis of articles.

Additional file 4. Table containing the number and percentage of countries included in the analysis of the cases.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Armond, A.C.V., Gordijn, B., Lewis, J. et al. A scoping review of the literature featuring research ethics and research integrity cases. BMC Med Ethics 22 , 50 (2021). https://doi.org/10.1186/s12910-021-00620-8


Received : 06 October 2020

Accepted : 21 April 2021

Published : 30 April 2021

DOI : https://doi.org/10.1186/s12910-021-00620-8


Keywords

  • Research ethics
  • Research integrity
  • Scientific misconduct


  • Open access
  • Published: 23 August 2022

Deceiving scientific research, misconduct events are possibly a more common practice than foreseen

  • Alonzo Alfaro-Núñez   ORCID: orcid.org/0000-0002-4050-5041 1 , 2  

Environmental Sciences Europe, volume 34, Article number: 76 (2022)


Today, scientists and academic researchers experience enormous pressure to publish innovative and ground-breaking results in prestigious journals. This pressure may distort the general understanding of how scientific research should be conducted: the basic rules of transparency may be undermined, data may be duplicated, and co-authorship rights may be compromised. As such, acts of misconduct may occur more frequently than foreseen, since these experiences are rarely shared or discussed openly among researchers.

While there are concerns about the health and transparency implications of such normalised pressure on researchers, it is generally accepted that researchers must simply endure it in order to survive in the competitive world of science. This is even more the case for junior and mid-career researchers who have recently set out as independent researchers. Only the smallest fraction manages, after many years of fierce and cruel rivalry, to obtain a long-term or, even less probably, a permanent position. There is a vicious circle: an excellent record of good publications is needed in order to obtain research funding, but how can pioneering research be produced in those first years without funding? Many may argue that this is a necessary process to ensure good-quality scientific investigation; perhaps, but perseverance and resilience may not be the only qualities needed when rejections arrive year after year.

There is a general culture in which scientists rarely share previous bad experiences, in particular those associated with misconduct, as they may not be seen as a relevant or topical subject for the scientific community. In what follows, a recent misconduct experience is shared, together with a few additional reflections and suggestions drafted in the hope that other researchers might be spared similarly unnecessary and unpleasant times.

Scientists are under great pressure to publish not only high-quality research but also a large number of publications, the more the merrier, within the first years of their career in order to survive in the competitive world of science. This pressure might lead young, less experienced researchers to take "shortcuts" that may in turn result in misconduct. The aim of this article is not only to report a case of misconduct to the stakeholders concerned, but also to describe it to the research community as a whole in the hope that other researchers might avoid similar experiences. Moreover, based on the existing literature and the present experience, some basic recommendations are shared as a reminder of the rules of transparency, duplication of data, and authorship rights, in order to avoid and prevent acts of misconduct.

Welcoming collaboration

During the first months of 2021, already in the second year of the COVID-19 pandemic, with most European research institutes and labs, and those all over the world, still in lockdown [ 1 ], I received an email from a young researcher overseas. This young fellow was based in Bangladesh, South Asia, a country with which I had never collaborated before. He was interested in a potential collaboration, had many ideas, and proved to be a very energetic person, writing to me daily and even several times a day during the first weeks.

There were obviously some suspicions about the nature of this collaboration, but a general background check was done and this fellow seemed to be legitimate. Thus, after a few weeks of discussing research ideas back and forth, I welcomed the collaboration. Over the first few months many ideas were elaborated and discussed, and we began to draft two review manuscripts simultaneously. In no time, it felt as if a long-standing collaboration had been born. However, it also required additional time because of the linguistic and cultural barrier: the main message sometimes got lost in translation, and this was reflected in the text of the various manuscript versions. We repeatedly discussed the importance of transparency, the correct use of previously published data, and the general rules of authorship and citation, especially when producing a new review document. Nevertheless, these errors were corrected, he assured me that he fully understood, and I trusted him.

After some time, enthusiasm started to decline and the highly motivated collaborator began to rush to complete the work regardless of quality, especially as a third manuscript was now also in play. I was not willing to sacrifice quality, so I started using more of my personal time to complete the different manuscripts; I felt committed. After six months or so, the first of the three manuscripts was ready and was submitted to a high-impact peer-reviewed journal, for a special issue to which I had been invited months earlier. A few months later, the second manuscript followed the same steps.

By the middle of April 2022, the first of the manuscripts had just been accepted, the second was already in its second round of review, and the third and last was ready for submission. I cannot deny the satisfaction of a good job properly done in record time (by my personal standards).

Deceptive surprise

In the final hours before submitting our last manuscript, I carried out the mandatory final inspection. I noticed something odd: two new citations had been added at the last minute, a change I had not approved. Even more curiously, both citations carried the new collaborator's name. I immediately searched for the two mysterious documents and found a book chapter and another peer-reviewed publication. To my surprise, the titles of these two new works were nearly identical to the topic we had just finished, and his name appeared as first author. Neither document was open access, and both had been published recently, one of them less than a week earlier. Furthermore, our manuscript, the very document I was supposed to submit that same day, contained six figures and four tables, all generated by our collaborative work. The book chapter had exactly the same figures and tables, merely in a different order, and the data and content were nearly identical. The wording was different, and there were some other co-authors from his region, but the content and underlying idea were the same.

Over the next hours, I went back to the other two manuscripts. Indeed, all my fears were confirmed. My new collaborator had been systematically committing fraud, replicating manuscripts using the same data and publishing them by himself using my very ideas and sentences.

I confronted him; I wanted an explanation, a reason for these actions, and I copied all the other co-authors in these communications. The three manuscripts had been built on international collaboration, other parties had actively participated, and now we were all compromised. His first reaction was to claim that he had not been aware that this was an illegitimate action, and then, silence. No satisfactory answer was ever received and, more troublingly, it seemed that some of the other co-authors neither cared nor were surprised.

The aftermath of deception

Over the following days, I wrote several emails describing the misconduct to the editors of the different journals, to the preprint services, and especially to the main institutions with which this fraudulent person was affiliated. The two manuscripts were withdrawn from the respective journals right away; together with the third manuscript, none of the documents will ever be published. There is a long history of documentation showing that withdrawals and retractions of scientific manuscripts may be the most common form of silently reporting scientific misconduct [ 2 , 3 ], and now I was part of it. Editors of the journals and publishing houses where the duplicated documents had appeared responded that they would investigate the case. However, after several months of waiting, and despite multiple letters of complaint providing all the evidence of the misconduct, no official sanctions have been taken by any of the journals and the documents remain available online. Editors have a responsibility to pursue scientific misconduct in submitted or published manuscripts; however, they are not responsible for conducting the investigation or deciding whether scientific misconduct occurred [ 4 ].

The preprint services' response was very clear and conclusive: regardless of the evidence provided, documents published online in preprint format cannot and will not be removed. Our names will now remain associated with this person for posterity, another wonderful discovery. The release of early results as preprints without going through peer review is a long-standing and well-known concern [ 5 , 6 , 7 ]. For the last few years I had been in favour of the early release of preprint publications; this experience has made me reconsider and entirely change that position. I find it unacceptable that, despite all the evidence of research misconduct, fraud and, especially, duplication of data, retraction of a preprint is not possible on most of the available preprint services.

As for the consequences or sanctions imposed on this "researcher" by his own institutions, these also remain unknown, as no reply has been received to date. Additionally, some of his personal collaborators, who had been added as co-authors during the editing of the manuscripts because they had "intellectually contributed" to the study, contacted me in the first weeks after the withdrawal. These collaborators were unhappy about the decision and complained: "Is it really necessary to retract the documents entirely, in particular one manuscript already accepted and a second one in review? Why was this decision not put to a vote among the co-authors?" They did not consider the duplication sufficient reason for withdrawal and claimed it had been "a rushed and wrong decision". My answer was simple: it was a clear act of research misconduct, the data had been duplicated and misused, and my decision could not be clouded by the grief of losing three publications. Besides, I was the last and corresponding author of all three manuscripts, so the responsibility and the final decision rested with me. As a curious additional detail, all the editors of the journals where the two duplicated manuscripts were published are from the same region as this person. Taken together, these facts lead me to conclude that misconduct may be relatively more common in some parts of the world, and that research culture may play an important role in this type of practice, but we are still afraid to discuss it [ 8 ]. There are no rigorous or systematic controls to prevent a single person from manipulating a dataset, duplicating it with slight textual modifications, and publishing it in different journals, especially when the time between submissions is short. There are thousands of journals, many more thousands of editors, and a seemingly infinite number of online platforms. Decisions on whether to retract or correct a study are more likely to take years than months; in the meantime, the work can misinform readers [ 9 ] and damage the reputation of researchers [ 3 ], if any sanction is imposed at all in the end [ 10 ]. On this basis, the author who duplicated our work and published it on his own may simply get away with it: two fraudulent copy-and-paste publications and zero consequences.

Hundreds of hours of work and nearly a year of effort were lost in an instant. Like many others, I believed I was working and interacting with researchers who share the values of honesty, openness and accountability, while trying to establish myself as an independent researcher producing good scientific work. Yet every aspect of science, from the framing of a research idea to the publication of a manuscript, is susceptible to influences that can lead to misconduct [ 11 ]. By withdrawing three manuscripts at once, now associated with misconduct, my research colleagues and I will suffer the consequences of the current academic culture of "publish or perish" [ 12 ].

Recommendations to avoid unpleasant research events

With two official retractions at the editorial offices of two major journals and three preprint documents that I cannot remove, all associated with fraud and scientific misconduct, I am probably the least qualified person, with the least authority, to provide feedback, let alone a short list of recommendations for preventing misconduct in research. Nevertheless, here I am. There are many general guidelines and basic rules for preventing, avoiding and reporting misconduct [ 3 , 13 , 14 , 15 ]; interested readers can consult the reference list if they wish to explore the topic further. Using these guidelines as a backbone, a short list of three main recommendations follows.

The first and possibly most important recommendation, despite the experience shared above: always welcome collaboration, but only after a thorough background check. This may sound contradictory, but contemporary science is built on collaboration and the interdisciplinary combination of fields [ 16 ]; one bad experience and one "rotten apple" cannot be allowed to disrupt the development of scientific research. Of course, it is essential to remain vigilant and to carefully investigate the background, interests [ 9 ] and history of each new door that opens along the way. Welcome collaboration cautiously.

A second recommendation: investigate the institution and location of prospective collaborators. As stated above, cultural background [ 8 ], and thus the location of a collaborator's institution, may play an important role in the final outcome. Most countries in Europe, as well as the U.S., have well-defined guidelines [ 3 , 10 ], which vary considerably in their principles and are ultimately implemented through each institution's research policies. However, there may be regions of the world where policies and regulations concerning misconduct, and its implications and consequences, are not yet well established [ 17 ]. Avoid those.

My third recommendation, and possibly the most relevant of all: do not take for granted that other researchers are fully aware that certain actions constitute misconduct. My biggest mistake was to believe that other researchers knew, or cared, about the basic rules concerning data duplication, transparency and respect for authorship rights. Ignorance still accounts for a large share of research misconduct [ 11 , 18 ]. Never assume that others know and respect the broad spectrum of misconduct.

Two additional personal recommendations: stay away from review manuscripts and book chapters, and avoid them at all costs; and consider very carefully whether to share your results as an early-release online preprint.

Conclusions

There is much to change in the current research environment to prevent situations like this from continuing or happening again. Young scientists need to be inspired and motivated by example, on the basis of integrity, ethical values, transparency and respect, rather than by the current climate of rejection and extreme pressure. The pressure to secure external funding and to publish in top-tier journals stands among the most common stressors contributing to research misconduct [ 15 , 19 ]. The same research culture that creates this pressure to publish and obtain funds also fosters a practice of silence that leads the community to ignore and avoid the topic of misconduct in research. While there is general concern and scientific journals attempt to take situations like this seriously, there should also be a more open space in which to inform junior, and even senior, researchers about this kind of predatory, thieving research practice.

Manipulating and duplicating data to inflate an academic record is a desperate and shameless act, and it truly represents scientific misconduct and fraud. Unfortunately, there is a general upward trend in research misconduct [ 13 ], which ultimately accounts for the majority of retractions in modern scientific publishing [ 20 ]. I would like to believe that even good people can do bad things under extreme pressure. Nevertheless, would this justify misconduct and fraud? Never!

Availability of data and materials

Not applicable.

References

1. Engzell P, Frey A, Verhagen MD (2021) Learning loss due to school closures during the COVID-19 pandemic. Proc Natl Acad Sci U S A. https://doi.org/10.1073/pnas.2022376118
2. Lafollette MC (2000) The evolution of the "scientific misconduct" issue: an historical overview. Proc Soc Exp Biol Med. https://doi.org/10.1177/153537020022400405
3. Hesselmann F, Graf V, Schmidt M, Reinhart M (2017) The visibility of scientific misconduct: a review of the literature on retracted journal articles. Curr Sociol 65:814–845. https://doi.org/10.1177/0011392116663807
4. Office of Research Integrity (2000) Managing allegations of scientific misconduct: a guidance document for editors. J Child Neurol 15:609–613. https://doi.org/10.1177/088307380001500907
5. Teixeira da Silva JA (2018) The preprint debate: what are the issues? Med J Armed Forces India 74:162–164. https://doi.org/10.1016/j.mjafi.2017.08.002
6. King A (2020) Fast news or fake news? EMBO Rep 21:e50817. https://doi.org/10.15252/embr.202050817
7. Moore CA (1965) Preprints: an old information device with new outlooks. J Chem Doc. https://doi.org/10.1021/c160018a003
8. Davis MS (2010) The role of culture in research misconduct. Account Res. https://doi.org/10.1080/714906092
9. Grey A, Bolland MJ, Avenell A et al (2020) Check for publication integrity before misconduct. Nature 577:167–169. https://doi.org/10.1038/d41586-019-03959-6
10. Resnik DB, Rasmussen LM, Kissling GE (2014) An international study of research misconduct policies. Account Res. https://doi.org/10.1080/08989621.2014.958218
11. Gunsalus CK, Robinson AD (2018) Nine pitfalls of research misconduct. Nature 557:297–299. https://doi.org/10.1038/d41586-018-05145-6
12. Fanelli D (2010) Do pressures to publish increase scientists' bias? An empirical support from US states data. PLoS ONE 5:e10271. https://doi.org/10.1371/journal.pone.0010271
13. Buela-Casal G (2014) Pathological publishing: a new psychological disorder with legal consequences? Eur J Psychol Appl Leg Context. https://doi.org/10.1016/j.ejpal.2014.06.005
14. Shaw DM, Erren TC (2015) Ten simple rules for protecting research integrity. PLoS Comput Biol 11:e1004388. https://doi.org/10.1371/journal.pcbi.1004388
15. Holtfreter K, Reisig MD, Pratt TC, Mays RD (2019) The perceived causes of research misconduct among faculty members in the natural, social, and applied sciences. Stud High Educ. https://doi.org/10.1080/03075079.2019.1593352
16. Andersen H (2016) Collaboration, interdisciplinarity, and the epistemology of contemporary science. Stud Hist Philos Sci Part A 56:1–10. https://doi.org/10.1016/j.shpsa.2015.10.006
17. Khadilkar SS (2018) Scientific misconduct: a global concern. J Obstet Gynecol India 68:331–335. https://doi.org/10.1007/s13224-018-1175-8
18. Fernández Pinto M (2018) Scientific ignorance: probing the limits of scientific research and knowledge production. Theoria: An International Journal for Theory, History and Foundations of Science 34:195–211
19. DuBois JM, Anderson EE, Chibnall J et al (2013) Understanding research misconduct: a comparative analysis of 120 cases of professional wrongdoing. Account Res. https://doi.org/10.1080/08989621.2013.822248
20. Fang FC, Steen RG, Casadevall A (2012) Misconduct accounts for the majority of retracted scientific publications. Proc Natl Acad Sci U S A. https://doi.org/10.1073/pnas.1212247109


Acknowledgements

Special thanks to Esther Agnete Jensen, Therese Kronevald, Stina Christensen, Aksel Skovgaard, Morten Juel, Jesper Clausager Madsen and Alonso A. Aguirre for their support and advice. The author would also like to thank the three anonymous reviewers for their comments, feedback and improvements.

This research received support from the Department of Clinical Biochemistry at Naestved Hospital, Region Sjaelland.

Author information

Authors and Affiliations

Alonzo Alfaro-Núñez: Department of Clinical Biochemistry, Naestved Hospital, Ringstedgade 57a, 4700 Naestved, Denmark; Section for Evolutionary Genomics, GLOBE Institute, University of Copenhagen, Øster Farimagsgade 5, 1353 Copenhagen K, Denmark

Contributions

The author read and approved the final manuscript.

Authors’ information

Web of Science Researcher ID H-2972-2019.

Corresponding author

Correspondence to Alonzo Alfaro-Núñez .

Ethics declarations

Ethics approval and consent to participate; consent for publication

The author gives full consent for publication.

Competing interests

The author declares no competing interests.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article.

Alfaro-Núñez, A. Deceiving scientific research, misconduct events are possibly a more common practice than foreseen. Environ Sci Eur 34 , 76 (2022). https://doi.org/10.1186/s12302-022-00659-3


Received : 26 April 2022

Accepted : 17 July 2022

Published : 23 August 2022

DOI : https://doi.org/10.1186/s12302-022-00659-3


Keywords: Ethical values, Transparency, Scientific fraud, Research misconduct and respect


Open Access

Peer-reviewed

Research Article

Research misconduct in health and life sciences research: A systematic review of retracted literature from Brazilian institutions

Rafaelly Stavale, Graziani Izidoro Ferreira, João Antônio Martins Galvão, Fábio Zicker, Maria Rita Carvalho Garbi Novaes, César Messias de Oliveira, Dirce Guilhem

Contributed equally to this work: Fábio Zicker, Maria Rita Carvalho Garbi Novaes, César Messias de Oliveira

Affiliations: Department of Nursing, College of Health Sciences, University of Brasilia, Brasília, Federal District, Brazil; Department of Statistics, Telecomunicações do Brasil – Telebrás, Brasília, Federal District, Brazil; Center for Technological Development in Health, Oswaldo Cruz Foundation, Brasília, Federal District, Brazil; Department of Nursing, College of Health Sciences, Health Sciences Education and Research Foundation – ESCS/Fepecs, Brasília, Federal District, Brazil; Department of Epidemiology & Public Health, Institute of Epidemiology & Health Care, University College London, London, United Kingdom

* E-mail: [email protected]

Published: April 15, 2019
https://doi.org/10.1371/journal.pone.0214272

Measures to ensure research integrity have been widely discussed due to the social, economic and scientific impact of research misconduct. In the past few years, financial support for health research in emerging countries has steadily increased, resulting in a growing number of scientific publications. These achievements, however, have been accompanied by a rise in retracted publications and by concerns about the quality and reliability of such publications.

This systematic review aimed to investigate the profile of medical and life sciences research retractions from authors affiliated with Brazilian academic institutions. The chronological trend between publication and retraction date, reasons for the retraction, citation of the article after the retraction, study design, and the number of retracted publications by author and affiliation were assessed. Additionally, the quality, availability and accessibility of data regarding retracted papers from the publishers are described.

Two independent reviewers searched for articles that had been retracted since 2004 via PubMed, Web of Science, Biblioteca Virtual em Saúde (BVS) and Google Scholar databases. Indexed keywords from Medical Subject Headings (MeSH) and Descritores em Ciências da Saúde (DeCS) in Portuguese, English or Spanish were used. Data were also collected from the Retraction Watch website ( www.retractionwatch.com ). This study was registered with the PROSPERO systematic review database (CRD42017071647).

A final sample of 65 articles was retrieved from 55 different journals with reported impact factors ranging from 0 to 32.86, with a median value of 4.40 and a mean of 4.69. The types of documents found were erratum (1), retracted articles (3), retracted articles with a retraction notice (5), retraction notices with erratum (3), and retraction notices (45). The assessment of the Retraction Watch website added 8 articles that were not identified by the search strategy using the bibliographic databases. The retracted publications covered a wide range of study designs. Experimental studies (40) and literature reviews (15) accounted for 84.6% of the retracted articles. Within the field of health and life sciences, medical science was the field with the largest number of retractions (34), followed by biological sciences (17). Some articles were retracted for at least two distinct reasons (13). Among the retrieved articles, plagiarism was the main reason for retraction (60%). Missing data were found in 57% of the retraction notices, which was a limitation to this review. In addition, 63% of the articles were cited after their retraction.

Publications are not retracted solely for research misconduct but also for honest error. Nevertheless, considering authors affiliated with Brazilian institutions, this review concluded that most of the retracted health and life sciences publications were retracted due to research misconduct. Because the number of publications is the most valued indicator of scientific productivity for funding and career progression purposes, a systematic effort from the national research councils, funding agencies, universities and scientific journals is needed to avoid an escalating trend of research misconduct. More investigations are needed to comprehend the underlying factors of research misconduct and its increasing manifestation.

Citation: Stavale R, Ferreira GI, Galvão JAM, Zicker F, Novaes MRCG, Oliveira CMd, et al. (2019) Research misconduct in health and life sciences research: A systematic review of retracted literature from Brazilian institutions. PLoS ONE 14(4): e0214272. https://doi.org/10.1371/journal.pone.0214272

Editor: Angeliki Kerasidou, University of Oxford, UNITED KINGDOM

Received: June 22, 2018; Accepted: March 11, 2019; Published: April 15, 2019

Copyright: © 2019 Stavale et al. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: All relevant data are within the manuscript and its Supporting Information files.

Funding: Researchers involved in this review were supported by a grant from the following agencies: the Federal District Research Foundation – FAPDF (1629 018); Coordination for the Improvement of Higher Education Personnel - CAPES Brazil (1651856), Special Programme for Research and training in Tropical Diseases –TDR/WHO (B20359), UNIEURO, and the Brazilian National Council for Scientific and Technological Development – CNPq. These supporting institutions did not contribute to the study design, data collection or analysis, manuscript writing or publishing. JAMG is employed by Telecomunicações do Brasil – Telebrás. Telecomunicações do Brasil – Telebrás provided support in the form of salary for author JAMG, but did not have any additional role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. The specific role of this author is articulated in the ‘author contributions’ section.

Competing interests: JAMG is employed by Telecomunicações do Brasil – Telebrás. There are no patents, products in development or marketed products to declare. This does not alter our adherence to all the PLOS ONE policies on sharing data and materials.

Introduction

Research integrity relies on rigorous methodological approaches during planning, conduct, documentation and reporting of studies [ 1 ]. Practices known to harm these steps are classified as research misconduct [ 2 ], [ 3 ]. It has become more common for studies addressing the impact of misconduct to be published as a warning to the scientific community [ 4 ], [ 5 ], [ 6 ]. In 2012, Fang and colleagues conducted a systematic review of retracted publications in the field of biomedical and life sciences using PubMed. Their findings showed that most of the retractions were due to fraud, and they addressed the impact of these findings since these studies are mainly publicly funded [ 4 ].

Research misconduct occurs when plagiarism, data manipulation, fabrication, poor study reporting, and lack of transparency are part of the scientific process [ 2 ]. These acts have been found to compromise the validity and reliability of research results [ 7 ], [ 8 ], [ 9 ]. On many occasions, these faults have led to a retraction notice. The publication of retraction notices intends to alert readers to serious errors—unintentional or of misconduct nature—that result in unreliable conclusions [ 7 ]. The purpose of retraction notices is also to avoid the use of these studies as a basis for future investigations, except for research about scientific integrity itself. Additionally, retractions are an important tool to evaluate scientific production, and the study of retractions supports measures to avoid error and misconduct.

Misconduct has scientific, social and economic impacts [ 5 ], [ 8 ], [ 10 ]. Economically, it has been estimated that billions of dollars have been wasted on funding studies based on retracted publications [ 11 ]. Socially, it affects evidence-based medicine by exposing study volunteers and the population as a whole to wrong medical decisions [ 10 ]. Scientifically, further investigations based on unreliable findings and unethical research lead to untrustworthy conclusions, compromising the advance of scientific knowledge [ 9 ], [ 12 ], [ 13 ]. Therefore, corrupted research conduct may generate a chain of misconduct [ 6 ], [ 10 ].

Financial support for health and life sciences research has steadily increased in Brazil, which has been followed by a rising number of scientific publications. Simultaneously, there has been a growing number of retracted publications, raising concerns about the quality and reliability of these articles. The first retraction reported in health and life sciences from Brazilian institutions was a paper about nursing that was published in 2004 [ 14 ]. At the time, the author admitted to plagiarism. Since then, other cases of research misconduct have been discovered, generating apprehension about scientific advances in the country.

Brazil is a member of the BRICS (Brazil, Russia, India, China, South Africa) cooperative group, which is responsible for some of the world's 1% most cited publications [ 15 ]. Although the citation impact of the country is below the global average, it has increased by 15% over the past six years [ 15 ]. The higher-impact publications were produced mainly in collaboration with other BRICS institutions. The scientific influence of the country, as well as its participation in collaborative funds and networks for promoting health research, is growing worldwide [ 15 ].

The understanding of research integrity and research misconduct varies institutionally and culturally [ 16 ], [ 17 ], [ 18 ], so it is important to understand the factors underlying the retractions of Brazilian scientific publications and the notable increase in retractions.

Despite the relevance of research misconduct and the awareness of breaches of research integrity, the analysis of retracted publications in Brazil is quite new. In this context, this systematic review proposed the following research question: What are the main reasons for retracted publications in the field of health and life sciences that were published by researchers who are affiliated with Brazilian institutions? Answering this research question will pave the way for future investigations about research integrity in Brazil by recognizing the particularities of the country.

This review intended to characterize the underlying causes of retraction, to assess the extent of research misconduct, to support discussions of possible solutions, and ultimately, to promote further investigations. To carry out this review, data were collected regarding reasons for retraction, temporal trends from publication to retraction, citation pattern after retraction, and the impact factors and ethical guidelines endorsements of the journals. Additionally, this review evaluated the quality of retraction notices considering whether complete information was provided in accordance with the COPE guidelines [ 1 ]–a fundamental aspect of research transparency.

Materials and methods

Protocol and registration.

This review protocol was registered with PROSPERO (CRD42017071647).

Information source

The screening of eligible publications was performed from late July to early August 2017 in accordance with the preapproved registered protocol.

Search strategy

Details of the search strategy are available via the following link: https://www.crd.york.ac.uk/PROSPEROFILES/71647_STRATEGY_20170610.pdf .

Study selection

For this review, retraction notices published from January 2004 until August 2017 were selected for articles with at least one author affiliated with a Brazilian institution, irrespective of authorship position and regardless of the publication year of the original article. The start date was the publication year of the first retracted article in nursing science written by authors affiliated with a Brazilian institution [ 14 ].

Studies in the field of life and health sciences, following the classification of the Brazilian National Council for Scientific and Technological Development, CNPq (from the Portuguese Conselho Nacional de Desenvolvimento Científico e Tecnológico) [ 19 ], that were published in English, Portuguese or Spanish in national or international journals were eligible for this review.

Regardless of their study design, all retracted articles, whether with complete or incomplete retraction notice information according to the Committee on Publication Ethics (COPE) guidelines [ 2 ], were eligible for this review when they were in accordance with the protocol. Retraction notices, articles with a retraction notice attached, or any other indication of a retraction were considered for data collection. Studies about research integrity itself were excluded, as were studies from other fields of scientific knowledge.
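
Purely as an illustration of the selection criteria just described, the filter can be sketched as follows in Python. The record structure and field names are hypothetical (they are not the study's actual extraction fields), and the January 2004 to August 2017 window is approximated at the level of whole years.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical record structure; field names are illustrative only.
@dataclass
class RetractionRecord:
    retraction_year: int
    affiliation_countries: List[str]
    language: str                    # "en", "pt" or "es" were eligible
    field_of_study: str              # CNPq classification, e.g. "health and life sciences"
    about_research_integrity: bool   # studies about research integrity itself were excluded

def is_eligible(rec: RetractionRecord) -> bool:
    """Approximate the review's eligibility criteria at the level of whole years.
    Authorship position is not modeled because it does not affect eligibility."""
    return (
        2004 <= rec.retraction_year <= 2017
        and "Brazil" in rec.affiliation_countries
        and rec.language in {"en", "pt", "es"}
        and rec.field_of_study == "health and life sciences"
        and not rec.about_research_integrity
    )

# Example with made-up records: only the first satisfies all criteria.
records = [
    RetractionRecord(2010, ["Brazil", "United Kingdom"], "en", "health and life sciences", False),
    RetractionRecord(2015, ["Germany"], "en", "health and life sciences", False),
]
print([is_eligible(r) for r in records])  # [True, False]
```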

Sampling and data collection process

Two independent reviewers searched for retracted articles via the PubMed, Web of Science and Brazilian Virtual Library of Health (BVS) databases. Google Scholar and the Retraction Watch website [ 20 ] were searched to identify additional publications and gray literature; the latter is an open-access portal reporting retracted papers worldwide. The results were compared, and a consolidated list of retracted articles was produced according to the protocol.

Data were collected and analyzed according to reason for retraction, time trend from publication to retraction, citation pattern after retraction, journal impact factor, quality of retraction notice information, author’s affiliation and adherence to either COPE or CONSORT guidelines on ethics and standard reporting.

Data collection rationale

  • Publication year and retraction year trend : The time between the date of publication and the date of retraction was calculated in years. Articles published and retracted in the same year were considered to have a time of 0. Publications without complete information regarding these dates were labeled as “not applicable” for this analysis.
  • Author’s affiliation : This analysis was limited to one author per paper. Data were collected from the last authors because they are typically responsible for mentoring and supervising the research planning, conduct and reporting [ 21 ]. Three articles were excluded from this analysis because the last author was not affiliated with a Brazilian institution.
  • Journal’s name and impact factor (IF) : The impact factor over the last 5 years was collected from Thomson Reuters indicators. Previous research has shown an increase in the citation of retracted papers when they were published in high-impact journals [ 9 ]. This review investigated whether the same pattern exists in Brazilian publications.
  • Ethical and reporting guidelines endorsement : It was assumed that journals that endorsed either the CONSORT or COPE guidelines followed ethical guidelines.
  • Area of study : The health and life sciences were categorized into the following sub groups: medical science, biological science, nutrition, dentistry, sports science, nursing science, physiotherapy, and pharmacology.
  • Retraction indicator : The presentation of retraction notices or retracted articles reflected how editors and databases did or did not facilitate their visibility. Transparency was ensured when retraction notices were attached to the original article and had a clear warning of retraction/withdrawal.
  • Reasons for retraction : The reasons for retraction were classified as a) error (inappropriate study design, data collection or reporting); b) fraud (data or image manipulation); c) author’s dispute (publications without the consent or recognition of all authors, sponsors or industry manufacturers of the tested product); d) duplicated publication (when authors or editors published the same article more than once); e) irregular citation pattern or citation stacking (a device used to inflate the impact factor of a journal); f) unknown (reason for retraction was not mentioned); g) plagiarism (image, text or unspecified forms of plagiarism); and h) lack of informed consent for the use and publication of images of participants.
  • Retracted by : Retraction notices are expected to acknowledge who retracted the article. Retractions by authors indicate good faith and are considered as retractions due to an honest mistake. Retractions by editors, depending on the reason, may indicate honest mistakes from the editorial board or misconduct from the authors.
  • Retraction endorsement by authors : Authors usually participate in and/or agree with the wording of the retraction. Reporting the participation of the authors and their endorsement indicates transparency in the retraction process.
  • Citation pattern of retracted articles : The number of times an article has been cited reflects its visibility and possible impact on the scientific community [ 22 ]. Therefore, the citation pattern before and after retraction was analyzed by calculating the mean citations per year from the date of publication to the date of retraction for each article. Similarly, the mean citations per year from the date of retraction to 2017 were also calculated. For comparison purposes, articles with a higher mean number of citations per year before retraction were considered to have a positive-citation pattern , while those with a higher mean number of citations per year after retraction were considered to have a negative-citation pattern (a minimal sketch of this calculation is given after this list).
  • Quality of retraction notices : According to the COPE recommendations [ 2 ], [ 7 ], retraction notices must contain: the date of retraction, motives for the retraction, whether the retraction was endorsed by the authors, who requested the retraction, and the proper citation of the original article in the retraction notice. A complete report of this information accounts for a high-quality retraction notice.
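
As referenced above, the time-to-retraction and citation-pattern calculations can be sketched in Python with hypothetical field names; the review does not publish its calculation code, so this is an illustrative reconstruction. Treating a within-year period as one year when computing a mean rate is an assumption made here to avoid division by zero; the review itself only states that same-year retractions count as a time of 0.

```python
from dataclasses import dataclass

@dataclass
class RetractedArticle:
    # Hypothetical fields; names are illustrative only.
    pub_year: int
    retraction_year: int
    citations_before: int  # citations accumulated between publication and retraction
    citations_after: int   # citations accumulated between retraction and 2017

def time_to_retraction(a: RetractedArticle) -> int:
    """Years between publication and retraction; same-year retractions count as 0."""
    return a.retraction_year - a.pub_year

def mean_citations_per_year(citations: int, start_year: int, end_year: int) -> float:
    # Assumption: a period within a single calendar year is treated as one year.
    return citations / max(end_year - start_year, 1)

def citation_pattern(a: RetractedArticle, census_year: int = 2017) -> str:
    """Positive-citation pattern if the mean citations/year were higher before retraction,
    negative-citation pattern otherwise (the tie-breaking rule is an assumption)."""
    before = mean_citations_per_year(a.citations_before, a.pub_year, a.retraction_year)
    after = mean_citations_per_year(a.citations_after, a.retraction_year, census_year)
    return "positive-citation pattern" if before > after else "negative-citation pattern"

# Example resembling the most cited article discussed in the results:
# published 2007, retracted 2016, 490 citations in total of which 58 came after retraction.
example = RetractedArticle(pub_year=2007, retraction_year=2016,
                           citations_before=490 - 58, citations_after=58)
print(time_to_retraction(example), citation_pattern(example))  # 9 negative-citation pattern
```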

The PRISMA statement checklist was used to assure the quality of this systematic review. The checklist is provided as S1 Table . Some topics did not apply to this study, considering that this review evaluated only retraction notices and excluded the original articles. Consequently, the methods used to assess the risk of bias of the individual studies, summary measures, synthesis of results and risk of bias across studies were not used.

Statistical analysis

The Shapiro-Wilk normality test was conducted for the citation pattern before and after retraction and for the journal impact factors used in the correlation analysis. These variables exhibited a non-normal distribution. Hence, the Spearman correlation test and a descriptive analysis were performed using the R statistical program version 3.4.2 and Excel for Mac 2011, version 14.4.3. The conducted tests are available as S1 File .
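
The analysis itself was run in R 3.4.2 and Excel; the following Python sketch, with made-up values, is included only to illustrate the reported workflow of a normality check followed by a non-parametric rank correlation.

```python
# Illustrative only: data values are made up; the study's analysis used R 3.4.2 and Excel.
from scipy import stats

citations_per_year_after_retraction = [0.5, 2.0, 1.1, 6.4, 0.0, 3.2, 1.8, 9.5]
journal_impact_factor = [1.2, 4.5, 2.3, 9.8, 0.4, 5.1, 3.0, 15.2]

# Shapiro-Wilk test: a small p-value indicates departure from normality,
# motivating the use of a non-parametric (Spearman) correlation.
for name, values in [("citations/year after retraction", citations_per_year_after_retraction),
                     ("journal impact factor", journal_impact_factor)]:
    w_stat, p_value = stats.shapiro(values)
    print(f"Shapiro-Wilk for {name}: W = {w_stat:.3f}, p = {p_value:.3f}")

# Spearman rank correlation between post-retraction citation rate and journal impact factor.
rho, p_value = stats.spearmanr(citations_per_year_after_retraction, journal_impact_factor)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```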

Retraction notice selection

A final sample of 65 retracted articles was retrieved ( Fig 1 ) from 55 different journals with an impact factor range of 0–32.86 and a mean impact factor of 4.7. The types of documents that were included were erratum (n = 1), retracted article (n = 3), retracted article with its retraction notice attached (n = 5), retraction notice with erratum (n = 3) and retraction notice (n = 45). The search of the Retraction Watch website [ 20 ] added 8 articles that were not identified by the search strategy using the bibliographic databases.

Fig 1. Study selection flowchart showing the initial number of records to the final sample retrieved for analysis. https://doi.org/10.1371/journal.pone.0214272.g001

The retracted publications covered a wide range of studies. Experimental studies (n = 40) and literature reviews (n = 15) accounted for 84.6% of the included articles ( Table 1 ). Medical science was the field with the largest number of retractions (n = 34; 52% of the retrieved articles), followed by biological sciences (n = 17; 26%), dentistry (n = 5; 7.7%), sports sciences (n = 2; 3%), pharmacology (n = 2; 3%), nutrition (n = 1; 1.5%), nursing sciences (n = 1; 1.5%) and physiotherapy (n = 1; 1.5%).

Table 1. https://doi.org/10.1371/journal.pone.0214272.t001

Ethical and standard reporting guidelines.

Of the 55 journals that published the retraction notices, only 7 clearly complied with the COPE and CONSORT guidelines. A total of 41.5% of the selected journals were neither members of COPE nor included in CONSORT's list; still, references to these two main ethical and reporting guideline recommendations were found in the Guide for Authors sections of these journals.

Affiliation, number of retractions and area of study of the authors.

A total of 26 Brazilian institutions had at least one research article retracted. Of these institutions, 20 (77%) were public institutions, 5 (19%) were private institutions and 1 (4%) was a nonprofit organization. The University of São Paulo was the institution with the highest number of retracted publications (n = 17), followed by the University of Campinas (n = 16). Both are leading Brazilian academic institutions with the highest scientific productivity [ 15 ]. Of the 62 articles analyzed, 48 (77.4%) were published by authors affiliated with institutions located in southeastern Brazil. The University of Campinas (São Paulo) also accounted for the highest number of retractions by a single author ( Table 2 ). The largest number of postgraduate programs in the country is concentrated in the southeastern region of Brazil [ 23 ]. One author had 8 retractions during the studied period. Plagiarism was the main cause of retraction for the two authors with the most retractions, both affiliated with this university ( Table 3 ).

Table 2. https://doi.org/10.1371/journal.pone.0214272.t002

Table 3. https://doi.org/10.1371/journal.pone.0214272.t003

Time trend between publication and retraction.

The time to retraction varied from 0 to 19 years. Five retraction notices (7.7%), 3 from 2011 and 2 from 2012, did not specify the year of retraction. In 2017, one article was retracted less than a year after it was published ( Fig 2 ).

Fig 2. Distribution of the number of articles by reason for retraction. Plagiarism is categorized as: a) unknown (purple bar), b) plagiarism of text (blue bar), c) plagiarism of image (light green bar). https://doi.org/10.1371/journal.pone.0214272.g002

The overall mean time to retraction was 3.36 years. Most articles (55%) took from one to three years from the time of publication to be retracted. The data showed that the number of retracted articles increased markedly from 2012 onward.

Number of citations after retraction

The analysis of post-retraction citations is a proxy assessment of the influence of articles on scientific activity despite their retraction. A total of 37% of the retrieved articles had a positive-citation pattern , while 63% had a negative-citation pattern . The most cited article with a negative-citation pattern was published in 2007 and retracted in 2016 [ 24 ]. Thus far, it has received a total of 490 citations, of which 58 came after the retraction of the article.

Association between impact factor and post retraction citation number

There was a strong positive correlation between the number of citations/year of an article after its retraction and the impact factor of the respective journal responsible for its retraction notice (Spearman rho = 0.69, p<0.05). The details of this analysis can be found in the S1 File .

Association between the impact factor and the number of citations before the retraction

There was a moderate correlation between the number of citations/year of an article before its retraction and the impact factor of the journal in which it was published (Spearman rho = 0.43, p<0.05).

The sample size of this review did not allow for a multivariate analysis. The details of this analysis can be found in the S1 File .

Quality of data from the retraction notices

Retraction notices are supposed to cite the original article [ 7 ]. However, our results showed that a proper citation of the original article was present in only 22 (33%) retraction notices; 42 retraction notices did not cite the original article, and 1 article was cited three times in its retraction, implying that the retraction notice applied to more than one publication. Missing data were found in 57% of the retraction notices retrieved. Missing information mainly concerned the date of retraction (7%), the reason for retraction (7%), who requested the retraction (3%) and endorsement by the authors (38.4%). Retraction warnings, such as a red withdrawn/retracted sign over the article, were also absent in 37% of cases.

Reasons for retraction

The identified reasons for retraction are illustrated in Fig 2 . Thirteen articles (20%) were retracted for at least two distinct reasons. Fraud was responsible for the retraction of three articles: two were retracted for image manipulation [ 16 ], [ 17 ] and one for data manipulation. Errors were attributed to inappropriate statistical analysis (n = 4), study design (n = 2) and inadequate data collection (n = 6). Retractions for duplicated publications were attributed to authors in 71% of the cases and to editors in 4.6% of the cases. Although an author’s dispute should not lead to a retraction [ 6 ], two articles were retracted due to an author’s dispute. However, no additional information is available for these retractions; therefore, it is not possible to assume this was the only reason for the retraction.

The comprehension of research integrity and of the consequences of misconduct varies between different cultures [ 16 ], [ 17 ], [ 18 ]. Likewise, the concepts of research integrity and research misconduct differ from institution to institution [ 2 ], [ 3 ]. In general, all institutions agree that fabrication, fraud and plagiarism negatively affect science to some extent and characterize research misconduct [ 3 ], [ 13 ], although misconduct can have a wider definition [ 2 ]. Research integrity refers to a broader concept that does not necessarily imply misconduct or a direct effect on scientific integrity [ 13 ]. This diversity may explain the disparities between journals, publishers, research institutions, funders and researchers when taking measures to prevent and report misconduct or breaches of research integrity. This scenario represents a challenge for academic studies on the matter.

In fact, for this review, the traditional bibliographic sources did not provide a complete picture of retracted articles. A total of eight (15%) articles were only identified on the Retraction Watch website, highlighting difficulties in retrieving retractions and suggesting poor transparency in the reporting of retractions.

Another obstacle to research transparency is the diversity of journal policies dealing with this subject [ 6 ], in that they do not always follow the COPE recommendations for the publication of retraction notices. Examples include the use of footnotes or reader comments as the only alert of a retraction [ 25 ], [ 26 ] and the absence of any type of warning in the database or in the article available from the journal. In addition, this review identified an erratum that was actually a retraction notice. These results suggest that some journal policies disregard breaches of research integrity.

Legal threats to publishers have an influence on their positions regarding misconduct and, therefore, on the issue of retractions [ 7 ]. Despite publishers' concern over litigation, this review found complete information, transparency and clarity in other retraction notices, supporting the existence of disparities between editors' and publishers' attitudes towards handling errors or misconduct.

The fact that public institutions funded the majority of the retracted articles also underscores the importance of coordinated action between institutions to prevent research misconduct and to ensure the responsible investment of public funds.

In 2013, a Brazilian citation-stacking scheme used to increase journal impact factors was revealed [ 24 ]. Thomson Reuters discovered that four journals were participating in self-citation in order to boost their impact factors [ 27 ]. Despite the considerable number of retractions that resulted from this scheme, this review's search strategy identified only a single paper retracted for an irregular citation pattern [ 28 ], known as citation stacking. This once more highlights the difficulties in finding retracted articles [ 29 ], [ 30 ] and, therefore, underscores the need for efforts to maintain transparency at every step of scientific production.

Previous studies have shown that fraud and error account for most of the retractions of biomedical articles [ 4 ], [ 28 ]; however, the present review revealed a larger number of retractions due to plagiarism. Fraud refers to the fabrication, falsification or manipulation of data, while error implies no intention to compromise the study [ 13 ]. Plagiarism may refer to the unjust appropriation of ideas (text plagiarism) or images (image plagiarism). This review showed that 76% of the reported plagiarism was accounted for by image plagiarism. Among the cases of image plagiarism, 15% of the retractions clearly stated the existence of similarities of images to previous publications and raised manipulation concerns. In addition, 33.3% of the retractions due to plagiarism did not specify the type of plagiarism.

In regard to image editing, there is a fine line between what is allowed and what is not, and there are no standardized guidelines across scientific journals [ 13 ], [ 31 ]. Coordinated action is needed in order to establish guidelines and education for authors regarding image editing and the rationale for what is considered misconduct [ 32 ].

The underlying factors to explain why image plagiarism is the major cause of misconduct are unclear. Nevertheless, the notable increase in retractions is an indicator of the awareness of scientific misconduct [ 33 ] in regard to different forms of plagiarism and the necessity of actions to avoid this behavior.

Are the increasing numbers of retracted publications a sign of scientific awareness of misconduct?

The results of this review are in accordance with those of previous studies about chronological trends of retracted publications [ 33 ], [ 34 ] that showed an increasing number of retractions in the past years. It is not possible to affirm that misconduct is increasing by evaluating only the retractions of authors affiliated with Brazilian institutions. Deeper investigation is needed to evaluate this aspect.

The increasing number of retracted publications over the years may be a sign of scientific awareness and of the response of authors, readers and institutions in flagging questionable research [ 33 ], [ 34 ]. This can be illustrated by authors requesting the withdrawal of their own articles or by other researchers alerting editors. In addition, more retractions are a reflection of advances in technology that can identify plagiarism and data manipulation [ 33 ], [ 34 ]. For instance, the use of software to identify image manipulation and plagiarism may increase the detection of such misconduct. Likewise, with a faster publication process, the publication of retractions and investigations, when needed, can be more efficient with the participation and collaboration of authors, institutions, researchers and journals.

What is the purpose of a retraction if not to be used to avoid more scientific misconduct?

A recent publication explored the nature of retracted articles [ 9 ]. The authors classified the citations as positive, neutral or negative. An interesting aspect of this study was its evaluation of a proper citation method for retracted articles; otherwise, a retracted article is cited as legitimate and, hence, reliable. In most cases, it is not possible to assess whether a retracted article served as a basis for a new scientific investigation despite its retraction or whether it was cited without careful attention. Our finding regarding post-retraction citation patterns shows how often retracted articles continue to receive positive citations without accurate identification of the retraction.

Further investigation is needed to understand why unreliable studies are still cited as legitimate [ 35 ]. Nevertheless, it is important to note that retracted publications might still be used in new scientific production. Proper citation of retracted publications brings awareness to the causes of the withdrawal and helps authors avoid ignoring the retraction. Proper citation gives researchers the information needed to make decisions that take the obvious ethical implications into account.

The role of distinct actors in the publication of retractions

Retractions are published at the request of an author, publisher, editor or the community [ 4 ], [ 7 ], [ 8 ], [ 9 ]. The intention of a retraction is to promote transparency and clarity regarding research misconduct or an honest error that led to a flawed article [ 4 ], [ 6 ], [ 7 ]. Thus, in accordance with the COPE Guidelines for Retractions , retractions should be published as soon as possible to prevent new citations of the unreliable work, researchers acting on its findings, or further erroneous conclusions being drawn. Because the main goal is to minimize a chain of flaws, retractions should be transparent regarding the reason for the retraction, the existence of endorsement by the authors and the date of retraction, and should include a reference to the retracted article, a DOI, attachment to the original article and adequate visibility [ 7 ], [ 36 ].

This review encompassed a wide range of retraction policies across journals, from the wording of the retraction to how the article is red-flagged [ 6 ], [ 7 ]. Regarding wording, the reasons for the retraction were sometimes vague or absent, and information on the retraction date and the citation of the retracted article was also missing for some publications. Regarding the methods used to signal a retraction to readers, these varied from a prominent red withdrawn/retracted notice ( red flag ) to a simple footnote. A possible explanation for the difficulties in retrieving articles for this review was the lack of a standardized publication format for retraction notices. Such practices run counter to the very purpose of publishing retractions: transparency.

Efforts to promote transparency act as a safeguard against unethical practices by all those involved in scientific activity: scientists, publishers, editors and academic institutions [ 18 ], [ 35 ], [ 36 ]. Each has a specific role and may or may not contribute to minimizing misconduct. Everybody has a role.

Limitations and strengths

Incomplete information of the retraction notices reduced the accuracy of our analysis. Hence, the results obtained may underestimate the number of retractions due to restrictions of our search strategy, the level of transparency of the published retractions and their availability in the bibliographic databases.

Additionally, our analysis did not include an assessment of the original paper’s quality, and therefore, it is not possible to draw conclusions regarding the relationship between the research quality and retraction. Further investigations should be performed with this purpose since it is known that a retraction does not necessarily indicate a completely invalid study [ 1 ].

Although this review considered only Brazilian institutions, research integrity is a worldwide concern, and the findings provide useful insights that could serve as a basis for future investigations.

Retraction notices do not account only for research misconduct; they also alert readers to honest mistakes in scientific practice [ 6 ]. Nevertheless, these incidents compromise the quality and validity of research results. Considering authors affiliated with Brazilian institutions, this review concluded that most of the retracted health and life sciences articles were retracted due to research misconduct.

Journals, funders, academic institutions, and researchers have an important educational and surveillance role to play in preventing research misconduct. The enforcement of disciplinary and educational measures is fundamental to reduce the incidence of corrupted science. In addition, the creation of a standard instrument for reporting retraction notices would assure the discussion of ethical policies and would promote a uniform publication of retractions.

This study attempted to emphasize the importance of coordinated action among all those involved in scientific production in order to promote research transparency. Good practices in conducting investigations and in reporting and publishing retraction notices have a positive impact. The underlying factors of research misconduct remain unclear. Measures to prevent misconduct may take into consideration the particularities of each society, including its weaknesses and strengths, depending on cultural aspects. However, the impact of bad science is borderless and is not culture-dependent.

Supporting information

S1 Table. PRISMA checklist.

https://doi.org/10.1371/journal.pone.0214272.s001

S2 Table. Study data.

https://doi.org/10.1371/journal.pone.0214272.s002

S1 File. Statistical analysis pipelines and rationale.

https://doi.org/10.1371/journal.pone.0214272.s003

Acknowledgments

We would like to thank the editors, publishers, institutions and authors who contributed to clear and transparent retraction notices. Without your integrity this review would not have been possible.

  • 2. Smith R. What is research misconduct? The COPE Report 2000: the Committee on Publication Ethics. BMJ Books. 2000. https://publicationethics.org/files/u7141/COPE2000pdfcomplete.pdf
  • 11. Neimark J. Line of attack: Christopher Korch is adding up the costs of contaminated cell lines. Science. 2015;347(6225). https://www.jillneimark.com/pdf/line-of-attack.pdf
  • 13. Shaw and Satalkar's schema for research integrity: https://wcrif.org/images/2017/documents/3.%20Wednesday%20May%2031,%202017/4.%202A-00/D.%20Shaw%20-%20Interpreting%20integrity;%20A%20conceptual.pdf
  • 15. Cross D, Thomson S, Sinclair A. Research in Brazil: a report for CAPES. Clarivate Analytics. 2018. https://www.capes.gov.br/images/stories/download/diversos/17012018CAPES-InCitesReport-Final.pdf
  • 18. Inter Academy Council (IAP). Responsible conduct in the global research enterprise: a policy report. The Netherlands: IAP. 2012. http://www.interacademies.net/file.aspx?id=19789
  • 19. Brasil. Tabela de Áreas do Conhecimento. Conselho Nacional de Desenvolvimento Científico e Tecnológico. Ministério da Ciência, Tecnologia, Inovações e Comunicações. 2017. http://www.cnpq.br/documents/10157/186158/TabeladeAreasdoConhecimento.pdf
  • 20. Oransky I. Tracking retractions as a window into the scientific process. Retraction Watch. 2018. https://retractionwatch.com
  • 21. Venkatraman V. Conventions of scientific authorship. Science. 2010. Retrieved 30 November 2017. https://doi.org/10.1126/science.caredit.a1000039
  • 23. GEOCAPES. Sistema de dados estatísticos da CAPES. Distribuição de programas de pós-graduação no Brasil em 2017. Brasil, 2018 [cited 2018 Dec]. https://geocapes.capes.gov.br/geocapes/


Scientific Misconduct and Medical Journals

JAMA and the JAMA Network, Chicago, Illinois

According to the US Department of Health and Human Services, “Research misconduct means fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results.”1 Other important irregularities involving the biomedical research process include, but are not limited to, ethical issues (eg, failure to obtain informed consent, failure to obtain appropriate approval from an institutional review board, and mistreatment of research participants), issues involving authorship responsibilities and disputes, duplicate publication, and failure to report conflicts of interest. When authors are found to have been involved with research misconduct or other serious irregularities involving articles that have been published in scientific journals, editors have a responsibility to ensure the accuracy and integrity of the scientific record.2,3

Although not much is known about the prevalence of scientific misconduct, several studies with limited methods have estimated that the prevalence of scientists who have been involved in scientific misconduct ranges from 1% to 2%.4-6 During the last 5 years, JAMA and the JAMA Network journals have published 12 notices of Retraction about 15 articles (including recent Retractions of 6 articles by the same author)7 and 6 notices of Expression of Concern about 9 articles. These notices were published primarily because the original studies were found to involve fabrication or falsification of data that invalidated the research and the published articles; in some cases, postpublication investigations could not provide evidence that the original research was valid. Since 2015, JAMA and the JAMA Network journals also have retracted and replaced 12 articles for instances of inadvertent pervasive error resulting from incorrect data coding or incorrect analyses and without evidence of research misconduct.8 During the same period, 1021 correction notices have been published in these journals. The JAMA Network policies regarding corrections and retraction with replacement have been published previously.8,9 In this Editorial, the focus is on a more complex and challenging issue: scientific misconduct involving fabrication, falsification, and plagiarism in the reporting of research.1

The Role and Responsibilities of Editors

JAMA and the JAMA Network journals receive numerous communications from readers, such as letters to the editor and emails, that are critical of the published content. Most of the critiques involve matters of interpretation, the need for clarification of content, or differences of opinion; some address ethical concerns, some are frivolous complaints, and some include calls for retraction. However, typically 10 to 12 times each year these journals receive allegations of scientific misconduct. All matters related to allegations of scientific misconduct for articles published in JAMA and the JAMA Network journals are evaluated and managed by the senior staff of JAMA including the editor in chief of JAMA , executive editor, executive managing editor, and the editorial counsel. This provides a consistent process for dealing with potential scientific misconduct. If the allegation involves an article published in a network journal, the editor in chief of that journal is involved and kept informed about the progress of the investigation. In addition, when necessary, additional expertise is obtained.

Allegations of scientific misconduct brought to journals are challenging and time-consuming for the authors, for editors, and potentially for institutions. The first step involves determining the validity of the allegation and an assessment of whether the allegation is consistent with the definition of research misconduct. In some cases, when authors are accused of misconduct, the criticism represents a different interpretation of the data or disagreement with the statistical approach used, rather than scientific misconduct. This initial step also involves determining whether the individuals alleging misconduct have relevant conflicts of interest. In some cases, it appears that financial interests and strongly held views (intellectual conflict of interest) may have led to the allegation. This does not mean that potential conflicts of interest on the part of the persons bringing the allegations preclude the possibility of scientific misconduct on the part of the authors, but rather, evaluation of conflict of interest is part of the assessment process.

If scientific misconduct or the presence of other substantial research irregularities is a possibility, the allegations are shared with the corresponding author, who, on behalf of all of the coauthors, is requested to provide a detailed response. Depending on the nature of the allegation, it can take months for some authors to respond to the concerns. After the response is received and evaluated, additional review and involvement of experts (such as statistical reviewers) may be obtained. In the majority of cases, the authors’ responses and additional information provided regarding the concerns raised are sufficient to make a determination of whether the allegations raised are likely to represent misconduct. For cases in which it is unlikely that misconduct has occurred, clarifications, additional analyses, or both, published as letters to the editor, and often including a correction notice and correction to the published article are sufficient. To date, JAMA has had very few disagreements with individuals making allegations of scientific misconduct, although some have been critical of the time it has taken for JAMA and other journals to resolve an issue of alleged scientific misconduct. 10 - 12

However, if the authors’ responses to the allegations raised are unsatisfactory or unconvincing, or if there is any doubt as to whether scientific misconduct has occurred, additional information and investigation are usually necessary, and the appropriate institution is contacted with a request to conduct a formal evaluation. At that time, and depending on the nature of the allegations, the journal may publish a notice of Expression of Concern about the published reports in question, indicating that issues of validity or other concerns have arisen and are under investigation. 2

Involving institutions is done with great care for several reasons. First, even just an allegation of misconduct can harm the reputation of an individual. Individuals involved in such allegations have expressed this concern and notification of an institution increases the level of scrutiny directed toward the involved person. In these cases, institutions are responsible for ensuring appropriate due process and confidentiality, based on their policies and procedures. Second, just as JAMA receives allegations of scientific misconduct and research irregularities, so too do institutions. It simply is not possible for every institution to conduct a detailed investigation of every allegation received; thus, JAMA and the JAMA Network journals ensure that institutions are only asked to be involved after a determination has been made that scientific misconduct is a possibility and for which the authors have not adequately responded to the concerns raised.

The Role and Responsibilities of Institutions

Institutions are expected to conduct an appropriate and thorough investigation of allegations of scientific misconduct. Some institutions are immediately responsive, acknowledging receipt of the letter from the journal describing the concerns, and quickly begin an investigation. In other cases, it may take time to identify the appropriate institutional individuals to contact, and even then, many months to receive a response. Some institutions appear well-equipped to conduct investigations, whereas other institutions appear to have little experience in such matters or fail to conduct adequate investigations 13 ; these institutions can take months to years to provide JAMA with an adequate response. In some cases involving questions of misconduct from outside of the United States, institutions have indicated that further investigation must wait until numerous legal issues are resolved, further delaying a response.

The type of investigation an institution conducts depends on the specific allegations and the institutional policies and procedures. In some cases, the investigation has involved reviewing the data, the article and related articles, and the analysis. In other cases, the investigation has involved reanalysis by the authors, or independent statistical analysis by a third party not involved in the initial study. Other cases have involved investigation of ethical issues related to the research, such as appropriate ethical review and approval of the study, informed consent for study participants, and notification of study participants about information related to risks of an intervention. No single approach is appropriate in all cases, but rather it depends on the specific allegation. In 2017, a group of representatives who deal with scientific misconduct, including university and institutional leaders and research integrity officers, federal officials, researchers, journal editors, journalists, and attorneys representing respondents, whistle-blowers, and institutions, examined best and failed practices related to institutional investigation of scientific misconduct. 14 The group developed a checklist that can be used by institutions to follow reasonable standards to investigate an allegation of scientific misconduct and to provide an appropriate and complete report following the investigation. 14

JAMA editors request institutions to provide periodic updates on the status of an investigation, and once the investigation is completed, institutions are expected to provide the editors with a detailed report of their findings. For cases in which misconduct has been identified, the institution and the authors may recommend and request retraction of the published article. In other cases, based on the report of the investigation from the institution, the journal editors make the determination of what actions are needed, such as whether an article should be retracted; or when a notice of Expression of Concern had been posted, whether it should be subsequently followed by a notice of Retraction. In each case, the notices are linked to and from the original article, and retracted articles are clearly watermarked as retracted so that readers and researchers are properly alerted to the invalid nature of the original articles. 2

Conclusions

Allegations of scientific misconduct are challenging. Not all such allegations warrant investigation, but some require extensive evaluation. JAMA reviews its approach to allegations of scientific misconduct on a regular basis to ensure that the process is timely, objective, and fair to authors and their institutions, and results in evidence that will directly address the allegations of misconduct. Ultimately, authors, journals, and institutions have an important obligation to ensure the accuracy of the scientific record. By responding appropriately to concerns about scientific misconduct, and taking necessary actions based on evaluation of these concerns, such as corrections, retractions with replacement, notices of Expression of Concern, and Retractions, JAMA and the JAMA Network journals will continue to fulfill the responsibilities of ensuring the validity and integrity of the scientific record.

Corresponding Author: Howard Bauchner, MD, Editor in Chief, JAMA, 330 N Wabash Ave, Chicago, IL 60611 ([email protected]).

Published Online: October 19, 2018. doi:10.1001/jama.2018.14350

Conflict of Interest Disclosures: All authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest. Ms Flanagin reports serving as an unpaid member of the board of STM: International Association of Scientific, Technical, and Medical Publishers. No other disclosures were reported.


Bauchner H , Fontanarosa PB , Flanagin A , Thornton J. Scientific Misconduct and Medical Journals. JAMA. 2018;320(19):1985–1987. doi:10.1001/jama.2018.14350


On the Willingness to Report and the Consequences of Reporting Research Misconduct: The Role of Power Relations

  • Original Research/Scholarship
  • Open access
  • Published: 26 February 2020
  • Volume 26 , pages 1595–1623, ( 2020 )

Cite this article



  • Serge P. J. M. Horbach 1 , 2   na1 ,
  • Eric Breit   ORCID: orcid.org/0000-0001-5069-7406 3   na1 ,
  • Willem Halffman   ORCID: orcid.org/0000-0002-1800-5884 1 &
  • Svenn-Erik Mamelund   ORCID: orcid.org/0000-0002-3980-3818 3  

11k Accesses

15 Citations

69 Altmetric


While attention to research integrity has been growing over the past decades, the processes of signalling and denouncing cases of research misconduct remain largely unstudied. In this article, we develop a theoretically and empirically informed understanding of the causes and consequences of reporting research misconduct in terms of power relations. We study the reporting process based on a multinational survey at eight European universities (N = 1126). Using qualitative data that witnesses of research misconduct or of questionable research practices provided, we aim to examine actors’ rationales for reporting and not reporting misconduct, how they report it and the perceived consequences of reporting. In particular we study how research seniority, the temporality of work appointments, and gender could impact the likelihood of cases being reported and of reporting leading to constructive organisational changes. Our findings suggest that these aspects of power relations play a role in the reporting of research misconduct. Our analysis contributes to a better understanding of research misconduct in an academic context. Specifically, we elucidate the processes that affect researchers’ ability and willingness to report research misconduct, and the likelihood of universities taking action. Based on our findings, we outline specific propositions that future research can test as well as provide recommendations for policy improvement.


Introduction

In recent years, the attention paid to research integrity and misconduct has increased. Besides attempting to measure or estimate the extent of misconduct, several scholars also have investigated its causes. This literature shows that important drivers of misconduct range from individual personality traits to systemic factors, which include productivity pressure and corporate influences (Fanelli et al. 2015 ; Tijdink et al. 2016 ; Horbach and Halffman 2019 ). Much less attention has been paid to how scientific misconduct is detected and denounced, for instance, through the peer review system (Guston 2007 ; LaFollette 1992 ) or social control mechanisms such as whistleblowing (Stroebe et al. 2012 ). These processes are crucial for signalling misconduct and articulating what the research community deems un/acceptable behaviour. In addition, the detection and sanctioning of research misconduct depend almost entirely on discovery and reporting by peers, with the potential exception of plagiarism and some forms of statistical manipulation, which may be discovered through automated detection by means of ‘scanners’.

In this paper, we aim to provide a more elaborate theoretical and empirical understanding of the causes and consequences of reporting research misconduct. We do so by approaching the issue from a whistleblowing perspective (Near and Miceli 2016 ; Santoro and Kumar 2018 ; Vandekerckhove 2016) and by applying theories of power and power differences. The term ‘whistleblowing’ typically refers to distinct activities of informing authorities or the public that the organization one is working for is doing something immoral or illegal. In our use of the term, we also include ‘softer’ forms of reporting, such as talking to colleagues or one’s supervisor. To avoid confusion, we use the term ‘reporting’ and only use ‘whistleblowing’ in reference to literature explicitly using this term.

A better understanding of research misconduct reporting could contribute to improved early warnings and more effective preventive policies to promote research integrity. Rather than appealing to individuals to take responsibility and relying on sanctions to keep them in line, such policies should pay more attention to social processes, such as power imbalances, group pressure and performance pressure. More specifically, appropriate reporting policies should target a culture of complacency and cynicism that normalises questionable research practices, or even outright misconduct (Clair 2015 ; Martinson et al. 2010 ). The literature on organisational integrity in general has focussed on power imbalances, retribution concerns and career consequences (Bowie 2010 ; Palazzo 2007 ), which probably also play a role in research integrity.

In this study, we aim to better understand the processes that facilitate or inhibit the reporting of alleged research misconduct by analysing a large sample of direct and indirect witnesses of research misbehaviour. We draw on qualitative responses from a survey conducted in eight European academic research universities in 2017. In this survey, respondents who indicated they had directly or indirectly witnessed an instance of misconduct were asked to respond to open-ended questions about this instance and how they handled it.

We focus on the following research question: How do varying power positions influence the reporting, or not reporting, of alleged research misconduct? Given the data available to us, we focus on three specific power positions: academic seniority (i.e. the formal work position), work contracts (i.e. permanent vs. temporary appointments) and gender. These power elements have been identified as key factors in most studies of organisational integrity in commercial organisations (Dozier and Miceli 1985 ; Cassematis and Wortley 2013 ; Culiberg and Mihelic 2017 ). We also study the influence of the specific type of misconduct, i.e. whether it involves a clear-cut type, such as plagiarism, or a more contested ‘questionable research practice’ (QRP), such as the disputed attribution of authorship.

To our knowledge, this is the first systematic study of researchers’ reasons for and accounts of reporting or not reporting witnessed misconduct. Our aim is to extend studies of ‘whistleblowing’ and of misconduct reporting from a predominantly corporate setting to that of academia. Our article is structured as follows: the “ Literature ” section presents an overview of the literature on research integrity and misconduct. In the “ Theoretical Framework ” section, we introduce studies of reporting and power, deriving factors potentially affecting researchers’ willingness to and the consequences of reporting alleged misconduct. The “ Methods ” section describes our study’s survey methodology, while  the “ Results ” section presents this survey’s main qualitative empirical results. In the “ Discussion ” section, these findings are formulated as propositions relating power relations to the reporting of alleged misconduct. We suggest these propositions can be used to explore further hypothesis-testing research on the relationship between power differences and misconduct reporting. Finally, the “ Conclusion and Recommendations ” section offers concluding remarks and policy recommendations.

Literature

Over the past decades, research misconduct has drawn the attention of scholars from various fields. Currently, an extensive literature focuses by and large on the prevalence and causes of misconduct, or on questionable research practices (QRP). The literature focuses to a lesser extent on the consequences of misconduct, for example, how retracted journal articles affect careers (Azoulay et al. 2017 ), how institutions deal with alleged misconduct cases (Horbach et al. 2018 ), the consequences for research reliability (Horbach and Halffman 2017 ; Al-Marzouki et al. 2005 ), and the role of scientific misconduct in general (Schulz et al. 2016 ).

This literature has also developed an inventory of ‘novel’ forms of academic misbehaviour (Callaway 2015 ; Sacco et al. 2018 ; Biagioli et al. 2019 ; Bouter et al. 2016 ), including estimations of how often some of these forms occur (e.g. Hopp and Hoover 2017 ; Fanelli 2009 ). Several authors highlight ‘risk categories’, including scientific fields and geographical areas where research misconduct or QRPs occur more frequently (e.g. Fanelli et al. 2015 ; Yang 2013 ; Stitzel et al. 2018 ). Similarly, the literature has outlined several potential causes of misbehaviour in science, including individual researchers’ personality traits (Tijdink et al. 2016 ); the organisational context in which these researchers operate (Anderson et al. 2007 ; Forsberg et al. 2018 ); and more systemic causes, such as competitive research funding and ‘publish or perish’ pressures (Fanelli et al. 2017 ; Sarewitz 2016 ).

Much less attention has been paid to the mechanisms that might detect and identify scientific misconduct. Some have argued that the peer review system is a prime example of such a mechanism (Guston 2007 ; LaFollette 1992 ), while others’ hopes rest on social control mechanisms, most notably whistle-blowers and the close colleagues of misbehaving scientists (Stroebe et al. 2012 ). Despite the limited evidence, misconduct case studies suggest that alleged culprits’ close colleagues and peers are the most likely to bring misconduct to light (Horbach et al. 2018 ).

Specifically, the processes involved in signalling and reporting alleged misconduct are not well researched. For example, we are not aware of any study examining researchers’ motivations for reporting alleged academic misconduct, or of any research on such actions’ effectiveness. However, some studies have made a case for establishing ‘safe whistleblowing procedures in academic organisations’ (Forsberg et al. 2018 ). There is also a significant literature on whistleblowing procedures in the business ethics and management fields (e.g. Culiberg and Mihelic 2017 ; Palazzo 2007 ; Vandekerckhove 2016 ). Nevertheless, little is known about how these processes are applied in academic research institutions such as universities. Questions about who are most likely to report, their motivations for doing so, and the effectiveness and potential negative consequences of reporting remain unanswered. A deeper understanding of how and why researchers raise concerns or keep quiet can contribute to developing organisational conditions that will support a stronger culture of research integrity.

Theoretical Framework

In addition to the literature on research integrity, there is a vast body of research on integrity in organisations in general, based on studies of wrongdoing in (or by) organisations (e.g. Palmer 2012 ), and research on organisational integrity management (e.g. Paine 1994 ). The literature on organisational integrity has paid more explicit attention to whistleblowing and the reporting of misbehaviour (Vandekerckhove 2016 ; Near and Miceli 2016 ). Among the more central theoretical questions in the literature on whistleblowing are the factors influencing (a) a witness of wrongdoing’s decision whether or not to report such instances; (b) the extent to which a reporter faces, or fears, retaliation; and (c) reporting’s effectiveness in terms of addressing wrongdoing. Besides the likelihood and effectiveness of reporting, the literature has also highlighted whistleblowing’s potentially negative consequences, such as psychosocial and reputational consequences (Bjørkelo and Matthiesen 2012 ; Culiberg and Mihelic 2017 ; Park and Lewis 2018 ).

The role of power relations is widely acknowledged in respect of whistleblowing or the reporting of wrongdoing in organisations. We therefore highlight two central conceptualisations of power in this literature. The first notion, as theorised in resource dependence theory, understands power as a central resource or asset that an individual may possess (Lukes 2005 ). From this perspective, an organisation’s less powerful members, such as younger employees, people with temporary work contracts, women, or people lower in the organisation’s hierarchy, are less likely to report alleged misconduct.

Several factors contribute to this decreased reporting likelihood. For example, these actors may have less access to powerful social networks in the organisation and therefore have less social capital. In addition, younger, and thus less experienced employees, may have less knowledge of the procedures and of how these are applied in practice. Low-resource members may also fear more retaliation and are generally less able to achieve genuine and desirable change as a result of reporting a case, such as adequate intervention in wrongdoing cases, or even improved integrity policies (Gao et al. 2015 ). This is especially true in cases involving more powerful wrongdoers.

A second theoretical dimension is French and Raven’s theory of social power, i.e. having the ability or being in a position to influence others. The theory involves five power bases: legitimate power (based on the legitimate right to prescribe behaviour), referent power (based on identification with one another), expert power (based on special knowledge or expertise), reward power (based on the ability to award resources), and coercive power (based on threats of punishment) (French et al. 1959 ). This suggests that reporters of alleged misconduct lacking such power bases are less likely to be effective, especially when reporting the misbehaviour of more powerful organisation members. This lack of power may affect the likelihood that they will report misconduct and the outcome of their reporting.

Based on the above, we expect researchers with fewer resources to be less likely to report alleged misconduct and their reporting to be less likely to result in effective interventions (Mesmer-Magnus and Viswesvaran 2005 ; Gao et al. 2015 ). Such researchers may include those in junior positions, such as doctoral students and post-docs, as well as those with temporary contracts. In addition, social power theory indicates that researchers with higher seniority (i.e. who have worked in academia longer) are more likely to (effectively) report misconduct cases (Cassematis and Wortley 2013 ; Gao et al. 2015 ).

Consequently, even though the role of power in whistleblowing has not been studied in an academic context, the variables (1) academic seniority and (2) temporary versus permanent work appointments are expected to affect the willingness to and the consequences of reporting research misconduct. In addition, some of the obstacles to and the potential consequences of reporting are believed to affect women more, although the evidence for this from the general integrity literature is inconclusive (Mesmer-Magnus and Viswesvaran 2005 ). To shed more light on the topic, gender will be explored as a third variable potentially influencing reporting.

Lastly, several studies have outlined the influence of the type of wrongdoing on the likelihood of reporting. In particular, clear-cut instances of misbehaviour are more likely to be reported than nuanced cases, which may be prone to different interpretations and normative assessments (Near and Miceli 1985 ; Mesmer-Magnus and Viswesvaran 2005 ). Accordingly, more indisputable forms of research misconduct, such as fabrication, falsification and plagiarism (FFP), are more likely to be reported than more questionable research practices, such as disputes over authorship or text recycling. Furthermore, if witnesses of wrongdoing perceive a high probability of their complaint being taken seriously and a low likelihood of it backfiring, they are more likely to report it. We also study the influence of the type of witnessed misconduct as an additional variable in our model.

Methods

Data Collection

Data on research misconduct as a workplace issue were collected by means of a web-based, cross-sectional survey (Questback). The questionnaire is included as supplementary material. Central themes in this survey were organisational policies regarding misconduct and integrity, reporting mechanisms and attitudes, tensions arising from and the risks of research misconduct, perceptions of integrity measures, and the prevalence of research misconduct (Mamelund et al. 2018 ).

In this paper, we draw on the part of the survey that consisted of open-ended questions on the respondents’ possible first-hand knowledge of a research misconduct incident. This approach was adopted from the validated and revised Scientific Misconduct Questionnaire (SMQ-R) (Habermann et al. 2010 ; Broome et al. 2005 ). The open-ended questions were:

How did you first learn about the instance of research misconduct?

Please describe the specific instance of research misconduct.

What did you do when you became aware of it?

Whom (titles only) did you talk to?

Were you able to talk to the individuals who were involved?

Was the instance reported? To whom and by whom?

What was the outcome? How did you feel about how it was handled?

Did you think anything changed as a result?

Is there anything you would have done differently?

Participant Selection

The survey was conducted among the employees of the European PRINTEGER project’s eight partner universities (PRINTEGER 2016 ). A PRINTEGER member sent a link to the survey questionnaire to the principal investigators in each of the partner universities. These investigators subsequently forwarded information about the survey and the link to a senior manager at their institution, who distributed these to the target population at their universities, i.e. the academic staff, excluding the technical and administrative staff. The reason for this distributed approach was to ensure a high response rate. The senior managers were encouraged to inform their academic staff of the survey and to highlight its importance.

Data collection took place from 7 March to 1 August 2017. The total population across the eight partner institutes comprised 20,815 academic staff members. Overall, 1126 respondents participated in the survey, with 194 responding to the open-ended questions. The demographic characteristics of the survey respondents and of those who answered the open-ended questions are provided in Table 1. Table 1 also provides information on potential reporting bias by indicating the differences between the respondents in the qualitative sample and the participating universities’ general population. As shown in the table, the qualitative sample contains a higher proportion of older researchers and of researchers in more senior positions (i.e. professors and associate professors) than the general population. It also contains a higher proportion of social and behavioural sciences researchers.

Each of the eight universities provided population data. Two challenges arose from the population analyses. Firstly, the coding systems used across the universities were inconsistent regarding academic positions, especially regarding the meaning of ‘teacher’ and of ‘academic field’. Where available, we used individual-level data and discarded the faculty information. Secondly, not all universities were able to reach all staff members, so the demographic variables could not all be fully covered. For example, one of the universities could not reach its Ph.D. students.

Privacy and Ethics Approval

The relevant ethics committees at each participating institution granted their ethics approval of the survey. The privacy policy was explained before the participants started the survey and they were asked to agree to this, thus providing informed consent. The responses were collected anonymously and are not traceable to the respondents’ institutes. We present quotes from the responses with the demographic information that the relevant respondent provided.

Data Analysis

The analysis used for this paper is explorative and inductive, i.e. we sought to develop hypotheses rather than to test them (Silverman 2016 ). Owing to the relatively small sample size and the diversity of the universities studied, as well as the richness of the qualitative responses to the open-ended questions, we refrain from an in-depth statistical analysis and focus instead on the content of the responses to the open-ended questions. We thus explore the issues raised in the responses and develop propositions that future research can test. The relatively high number of qualitative responses allowed us to examine the frequency of different responses in order to identify more and less common ones, but not to estimate precise rates for the population. Consequently, we do not provide significance levels and do not claim that these proportions can be generalised.

Second, we collapsed the nine initial questions (see above) into the following five categories due to the overlap between some of the questions: (1) type of research misconduct witnessed, (2) source that led to awareness of the misconduct, (3) initial reaction to the awareness, (4) type of reporting and (5) outcome of the reporting. Broome et al. ( 2005 ) took a similar approach, although the final categories are not entirely similar.

Third, we systematized the responses within each of the five categories by searching for patterns between the responses. We took an open coding approach, which meant that we assigned our own labels to the responses, rather than using a pre-existing template (Strauss and Corbin 1990 ). An exception was made in the ‘type of misconduct witnessed’ category, in which we used existing categorizations of misconduct, such as falsification, fabrication and plagiarism. We thus categorised responses into codes. At times, responses were categorized into multiple codes; consequently, the frequency of the codes may be larger than the overall number of respondents. This process of categorizing codes was iterative, i.e., some codes were collapsed and new ones developed as the analysis proceeded. “Appendix” provides an overview of the coding and the responses, including illustrative examples.

Fourth, once we had developed an appropriate set of response categories, we moved on to analyse the relationships between the responses and the demographic variables. We performed this analysis using simple cross-tables, i.e. by counting the frequency of codes across the different demographic variables. Given our theoretical interest in power relations, we focused especially on gender, age, academic seniority and work appointment. We also included type of misconduct in the analysis, distinguishing between clear-cut types of research misconduct and more nuanced forms of misconduct.

The results of these analyses are provided in the following sections. We present the results of both the qualitative and the quantitative aspects, but the latter mainly serve to contextualise the qualitative material, which forms the core of our results.
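As an illustration of the cross-tabulation described above, the following is a minimal sketch in Python with pandas. The data, column names, and code labels are hypothetical and serve only to show the counting procedure, not the study’s actual coding scheme or results.

```python
import pandas as pd

# Hypothetical coded responses: each row is one open-ended response, and
# "codes" holds the labels assigned during open coding (a response may
# carry several codes and is then counted once under each of them).
responses = pd.DataFrame({
    "position": ["PhD student", "Professor", "Postdoc", "Professor"],
    "contract": ["temporary", "permanent", "temporary", "permanent"],
    "gender":   ["female", "male", "male", "female"],
    "codes":    [["did_nothing"], ["reported", "confronted_culprit"],
                 ["talked_to_colleagues"], ["reported"]],
})

# One row per (respondent, code) pair, then a simple cross-table of code
# frequency by academic position.
exploded = responses.explode("codes")
cross = pd.crosstab(exploded["position"], exploded["codes"])
print(cross)
```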

Results

In this section, we will first (“Effect of Power Relations on Reporting” section) look more closely at the distribution of the responses with respect to the three variables outlined in the “Theoretical Framework” section: seniority, work contracts and gender. For each variable, we first present a brief overview of the quantitative distribution of responses related to respondents’ demographic information, after which we present and analyse the responses more qualitatively. We analyse the responses in terms of each variable in three ways: (1) whether or not a case was reported, (2) how and to whom a case was reported, and (3) the respondent’s perception of the consequences of reporting. We then (“Type of Misconduct Reported” section) explore in detail how the respondents’ reporting varies across the different types of misconduct witnessed. These findings suggest how elements of power are involved in misconduct and its potential reporting, and they lead to several propositions on the role of power relations in the reporting of alleged misconduct in the “Discussion” section.

Effect of Power Relations on Reporting

Academic Seniority

The existing literature on reporting suggests that researchers in junior or lower academic positions are less likely to report alleged misconduct compared to those in more senior positions. In our data, there is indeed a division between professors and researchers in lower positions. In our limited sample, professors reported a witnessed case of alleged misconduct more often (67% reported vs 29% not reported) than other members of academia, such as associate professors (37% vs 53%), post docs (35% vs 61%) and Ph.D. students or TAs (39% vs 51%). Since we defined reporting as giving account of a case to any party tasked with handling such cases, these observations hold even when taking into consideration that some junior researchers would report misconduct to senior researchers, who would then report it to official channels, such as a research integrity committee.

When we examine the responses across age groups, older researchers are also more likely to report misconduct. While respondents in the age group 20–29 claim to have reported misconduct in 33% of the witnessed instances, the percentages for the other age groups are 32% (30–39), 51% (40–49), 65% (50–59), and 51% (60+). In this small sample, the effects of age and academic rank could not be disentangled, but the pattern suggests that age should not be dismissed as a relevant factor.

The qualitative responses provide more insight into the reasons for reporting or not reporting. Notions of power, in this case in the sense of resource availability, seem to play a crucial role. A prominent reason for junior researchers not to report was their fear of negative consequences, such as losing future opportunities or damage to their social relations at work. This is exemplified in the following quotes:

No, I did not push it further for fear of career consequences (30-39, female, Law/Arts/Humanities, PhD student, temporary, 0-5 years).
I reported to the direct supervisor, but was sure I could not go beyond that, as that would directly have impeded my own relation with my own supervisor (I am a PhD student) (20-29, male, Natural Sciences, PhD student, temporary, 0-5 years).
No, I decided to not report it, because I'm a junior researcher and afraid that it would affect my career possibilities (20-29, female, Medical/Life Sciences, PhD student, temporary, 0-5 years).

Though less common, this fear of negative consequences was also reported by some more senior researchers:

Nothing. If I said sth., I'd be disadvantaged in all [aspects] of my work. Office politics (No age, no gender, Natural Sciences, permanent, assistant/associate prof, 16+ years).

Another frequently mentioned reason was distrust of the management’s willingness to take any corrective action, as exemplified below:

No. My department manager never takes any action on any problem (40-49, female, Social/Behavioural Sciences, PhD student, temporary, 0-5 years).

A third reason mentioned was the belief that reporting would not lead to any changes, notably because certain individuals are protected. These responses show how issues of seniority, hierarchy and power affected respondents’ decision not to report an alleged case:

No. My old boss is too powerful in the community (30-39, male, Natural Sciences, leadership role, assistant/associate prof, temporary, 6-10 years).
Nothing, there is no going against my boss. There have even been lawsuits in the past, but the University has always covered for her (30-39, female, Law, Arts and Humanities, PhD student, temporary, 0-5 years).

In all these examples, the common hierarchical structure in academia plays a prominent role in an actor’s decision to report or not. Both conceptions of power outlined in the “Theoretical Framework” section (resource dependence and French and Raven’s theory of social power) become visible in the respondents’ comments. Specifically, senior colleagues’ control of resources, such as research and promotion opportunities, seems to be a prime concern. This control even extends beyond the research organisation’s immediate environment to the wider research field and peer community. The organisational and wider research context may thus become a source of normalised behaviour, not least because reporting is restricted.

Another response mentioned that power relations, in particular seniority, may not only affect the reporting of alleged misbehaviour, but may actually be one of its causes. A female professor in the life sciences explains how she was ‘pressured’ into behaving in dubious ways:

I was working with a more senior professor to promote findings of some research through a prestigious impact/ knowledge mobilisation event which was presenting the 'best evidence to inform practice'. The prof wanted to promote a tool we had developed as having a positive impact. Myself and the wider research team had concerns that we had no evidence of the tool's efficacy and in fact the small feasibility study had raised some concerns about its effect. We wanted to wait until the full trial was complete before promoting it as a tool. Although we had agreed as a team the limits of what could be said about the tool, I was the only team member working with the professor on this impact event and a few days before, he called me to [participate in] a teleconference with the sponsor of the event, and they both put a lot of pressure on me to allow the tool to be presented as effective. When I started to explain to the sponsor what the concerns of the team were, the professor muted the call and told me not to tell her that! It was a very intimidating situation and I felt I had to withdraw (40-49, female, Medical/Life Sciences, leadership role, assistant/associate prof, permanent, 16+ years).

The fear mentioned concerns not only fear of superiors, but also, especially among such superiors, fear of research misconduct becoming public. Thus, while power plays a key role in discouraging reporting, senior researchers may not perceive their use of power as self-serving, but as an effort to protect a collective interest, often the research institution’s image (although this could be considered misguided). The following quotation is from a female Ph.D. student explaining why she did not report an incident after consultation with her professor:

Only to the professor, who insisted it [should] not [be] reported to the ethics officer. They didn't want it to become public and wanted to fix it themselves (20-29, female, Medical and Life Sciences, PhD student, temporary, 0-5 years).

The following is a similar response from a more senior respondent:

The instance was not reported to maintain the reputation of the faculty. The decision was made solely by the professor involved (60-69, female, Natural Sciences, researcher, permanent).

We also examined to whom the incidents were reported. Our data suggest that the different levels of seniority have different reactions. Of our respondents, professors (26%) and associate professors (27%) informed a supervisor more often about an incident than Ph.D. students/TAs (12%) and postdocs (4%). Conversely, Ph.D. students/TAs (29%) and postdocs (35%) responded more often by talking to their colleagues about the misconduct than professors (7%) and assistant/associate professors (11%) did. Furthermore, professors and associate professors confronted the culprits more often (26% and 24%) than postdocs (17%) and Ph.D. students/TAs (12%). Although the tendency is not very strong, this may be cautiously interpreted as ‘soft’ responses dominating with regard to junior researchers, while more senior academics make use of ‘harder’ means.

Our material also showed that respondents in more senior positions perceived the outcome of reporting alleged misconduct as constructive more often than those in junior positions did. Professors experienced a constructive change (34%) most often, followed by associate professors (23%), postdocs (22%) and Ph.D. students/TAs (17%). The tendency is similar across ages. Overall, this level of reporting indicates that a perceived constructive change is uncommon across all positions, but especially so for the most junior academic positions; however, the small number of observations means these findings should be regarded with caution.

The following responses by a female Ph.D. student are an example of ‘no change’ after reporting an incident. She reported an issue of undeserved authorship, primarily due to work-related frustrations:

To [an] ombudsperson by me (PhD student). Decision was made because I was suffering greatly as a PhD student under her (sic.) supervisor; several other instances of misconduct also applied in our work relationship, in addition to emotional abuse (being personally criticized, being called an[d] yelled at after hours, being pressured into misconduct, ...) (20-29, female, Social and Behavioural Sciences, PhD student, temporary, 0-5 years).

She continued to explain that reporting her supervisor was difficult due to the latter’s social position and, crucially, that she had refrained from reporting any misconduct ever since:

The attitude seemed to be that as a senior she [had to] know what she was doing, and as a junior researcher, I felt met with disbelief. Little action was undertaken, and I have refrained from reporting any misconduct ever since.

The expectation that action will be taken, and some assurance that reporting a case will have an effective outcome, can increase academics’ willingness to report cases. Organisational procedures, such as reporting to an ombudsperson, could offer powerful resources to redress imbalances, but they have to provide convincing intervention opportunities. Our data suggest that procedures perceived to handle cases adequately increase respondents’ willingness to report.

Finally, the issue of power was not always mentioned as a barrier to reporting others’ misconduct. Our material also contained an example of a Ph.D. student who had benefited from her supervisor’s use of power. This student used the survey to reflect on the incident:

I talked with the professor, who put my name on the paper; he felt he had done me a favour and I did not object. Not reported. I decided it was good for my career and I should just leave it (regarding a case where “my name was put on a paper that I had had nothing to do with”; 20-29, female, Medical/Life Science, PhD student, temporary, 0-5 years).

Work Appointment

The second variable we explored concerns the relation between employment precarity and misconduct reporting. This was based on the assumption that researchers with temporary contracts are less likely to report alleged misconduct than researchers with permanent contracts. As we will show, our data suggest that the patterns for employment conditions and academic seniority are similar. A confounding factor between the two variables could be that, in general, professors and associate/assistant professors are likely to have permanent positions, while Ph.D. students, teaching assistants and post-docs usually, if not always, hold temporary positions.

Examining the respondents’ willingness to report, we found that researchers in permanent positions reported incidents of suspected misconduct nearly twice as often as those in temporary positions: whereas 59% of researchers in permanent appointments reported such an incident, only 31% of those in temporary appointments did so.

In the qualitative responses, the respondents never explicitly mentioned temporary employment; it surfaced only indirectly through references to power and hierarchy. For example, a common reason why researchers in temporary positions did not report was their fear of negative career effects. The following quotations exemplify this fear:

No, I can't because of hierarchy. It is a superior (sic) and denouncing could have a negative impact on my job (40-49, gender: other, Law, Arts and Humanities, assist/assoc prof, temporary, 11-15 years).
No, because this study was important [for] the dissertation of the PhD student who had limited time. I didn't feel like I had enough support to get out of this unharmed (male, 30-39, left academia, temporary, Social Sciences).

Others attributed their decision not to report to the management not taking them seriously:

I did not report this explicit situation. I have however gone to the ombudsperson for similar situations (p-hacking, unauthorized authorship, ...) and was again discarded (sic) as […] the junior one with no experience (20-29, female, Social Sciences, PhD student, temporary, 0-5 years)

Such responses and ways of reasoning were far more common among respondents with temporary work contracts than among their permanently appointed colleagues. The same holds for another common response, namely a lack of knowledge regarding what to do, i.e. where and how to report:

No, I wasn't directly working with him anymore when I found out and I didn't know what to do (30-39, female, Social and Behavioural Sciences, assistant/associate prof, temporary, 11-15 years).
No, no idea where I can report this (30-39, female, Law, Arts and Humanities, assistant/associate prof, temporary, 6-10 years).

We also examined how the types of responses varied between researchers with permanent and those with temporary work contracts. The results indicate that researchers in permanent positions confronted the culprits more often (26%) than those in temporary positions (14%). Such confrontations often involved students or other co-authors, as in the following example:

[I] talked to the persons involved, co-authors, and reported [this misconduct] to a faculty representative specialized in misconduct (60-69, male, Medical and Life Sciences, professor, permanent, 16+ years).

Researchers in permanent positions also informed their superiors more often (26%) than those in temporary positions (14%). Similarly, researchers in temporary positions ‘did nothing’ more often (23%) than their colleagues in permanent positions (15%). These researchers also talked to colleagues more often (24%) than those in permanent positions (11%). In the qualitative responses, these statements were usually not explained or narrated, but emerged in the form of “I did nothing” or “I discussed it with my colleagues”. However, there were noteworthy exceptions:

Nothing, because the present head of department (new last author [of] this paper) tries to eliminate me and I am completely dependent [on] him (female, 60-69, Medical/Life Sciences, assistant/associate prof, temporary, 16+ years).
Nothing, due to the principal researcher’s wish (30-39, male, Medical/Life Sciences, temporary, assistant/associate prof, 0-5 years).

Here too, dependency on the research and career resources controlled by staff members in more permanent positions seems crucial to how power relations operate.

Finally, we examined how researchers with temporary and those with permanent positions perceived the outcomes of reporting. Our data suggest that a larger percentage of researchers with permanent positions report a constructive change, as shown in the following:

[I] asked the editor of the journal to withdraw the paper; the thesis itself did not suffer, as the citation and reference were included in another chapter in the same thesis (60-69, male, Medical and Life Sciences, professor, permanent, 16+ years).
This person got fired (30-39, female, Social and Behavioural Sciences, associate/assistant professor, temporary, 6-10 years).

Only researchers in permanent positions reported negative outcomes (7%), such as “the relationship with the author involved is somewhat troubled” (60–69, female, Medical and Life Sciences, professor, permanent, 11–15 years). Consequently, while fear of the potential negative consequences of reporting is often presented as a reason for not reporting a case, hardly any such consequences were known or acknowledged in practice. This holds in particular for researchers with temporary contracts.

Finally, researchers in temporary positions reported ‘no change’ to a greater extent (55%) than those in permanent positions (38%). These responses were usually not narrated, but expressed in the form of “no change” or “Not much—but at least more of my earlier work is cited” (30–39, male, Natural Sciences, assistant/associate professor, temporary, 6–10 years).

In our sample, therefore, researchers in permanent positions reported misconduct more often than those in temporary positions, and their reporting is more likely to have a constructive result. This indicates that, in academia, the type of work appointment may have an effect on the reporting practices and their outcomes. Possible reasons for the difference between the two groups are that researchers in temporary positions feel they have more to lose by reporting and are less interested in it, because they identify less with the work organisation.

The third variable we explore concerns gender differences in the reporting of alleged misconduct and the consequences of this. This exploration was based on the literature’s assertion that men are more likely to report misconduct and to perceive the consequences as more constructive than women. However, contrary to our expectation and as we will show, few of the differences are related to gender.

There was little difference between men and women regarding the reporting of alleged research misconduct. 51% (N = 50) of the men and 45% (N = 40) of the women claimed to have reported a witnessed case of misbehaviour.

Nor do there seem to be substantial gender differences regarding reactions to misconduct. With regard to the most commonly reported ways of acting upon cases of alleged misconduct, 24% of women and 20% of men reported having confronted the culprits. The same holds for ‘talking to colleagues’ (19% of women versus 14% of men) and ‘informing a supervisor’ (20% of women versus 22% of men). ‘No response’ was selected by 16% of the women and 18% of the men.

Finally, our data show no substantial difference between men’s (27%) and women’s (26%) perceptions of reporting having constructive consequences. Women report slightly more negative consequences (6%) than men (3%), but the relative difference is small and the absolute numbers are so low that we cannot draw any clear conclusions from this result.
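To make the ‘too small to conclude’ point concrete, the following minimal sketch (not the authors’ analysis) checks whether reported/not-reported splits of this size are statistically distinguishable. The denominators (98 men, 89 women) are back-calculated from the percentages quoted above and are therefore assumptions; the SciPy tests are a generic choice, not the paper’s method.

from scipy.stats import chi2_contingency, fisher_exact

# Counts quoted in the text: 51% (N = 50) of men and 45% (N = 40) of women
# reported a witnessed case; totals of 98 and 89 are inferred for illustration.
reported = {"men": 50, "women": 40}
total = {"men": 98, "women": 89}

table = [
    [reported["men"], total["men"] - reported["men"]],        # men: reported vs. not
    [reported["women"], total["women"] - reported["women"]],  # women: reported vs. not
]

chi2, p, dof, _ = chi2_contingency(table)
_, p_fisher = fisher_exact(table)
print(f"chi-square p = {p:.2f}, Fisher exact p = {p_fisher:.2f}")
# Both p-values are far above conventional significance thresholds, which is
# consistent with the conclusion that the gender difference is negligible.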

In the qualitative responses, there were also no noteworthy differences between men and women in how instances of reporting misconduct, and reactions to them, were articulated and made sense of. None of the responses used wording related to gender. In terms of our data, gender does not seem to be distinctively related to researchers’ reporting of misconduct, or to the outcomes of their reporting.

Type of Misconduct Reported

The last variable we analysed concerns the type of misconduct witnessed. Although not directly related to power structures, this variable probably influences reporting behaviour, since more clear-cut types of research misconduct may be more readily reported than more nuanced forms of misconduct (Near and Miceli 1985 ; Mesmer-Magnus and Viswesvaran 2005 ). We distinguish between fabrication, falsification and plagiarism (FFP) as clear-cut forms of misconduct, and the rest (QRP) as more nuanced forms of misconduct (see “Appendix”).

Table 2 shows that plagiarism was the most commonly reported type of misconduct, followed by authorship issues, fabrication, cherry picking, falsification, text recycling, and data manipulation. Taking relative rather than absolute numbers into consideration, we conclude that the more contentious forms of misconduct, such as authorship and cherry picking, have a lower reporting ratio than the more clear-cut forms, such as plagiarism, falsification and fabrication.
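As a minimal sketch of the kind of tabulation that underlies this comparison, one might compute a per-type reporting ratio, i.e. the share of witnessed cases that respondents say they reported. The counts below are placeholders invented for illustration, since Table 2 itself is not reproduced here.

import pandas as pd

# Hypothetical counts per misconduct type (not the paper's data).
cases = pd.DataFrame({
    "type": ["plagiarism", "authorship", "fabrication", "cherry picking",
             "falsification", "text recycling", "data manipulation"],
    "witnessed": [30, 25, 18, 20, 12, 10, 8],
    "reported":  [20, 10, 11,  8,  7,  6,  3],
})

# Relative rather than absolute numbers: the share of witnessed cases reported.
cases["reporting_ratio"] = cases["reported"] / cases["witnessed"]
print(cases.sort_values("reporting_ratio", ascending=False))
# Comparing ratios avoids conflating 'often witnessed' with 'often reported',
# which is the point made in the text.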

We also examined the perceived outcomes of reporting with regard to the different forms of misconduct. Table 3 shows that plagiarism has the highest number of constructive consequences related to reporting, followed by cherry picking and falsification. In terms of no change, authorship has the highest number, followed by plagiarism and cherry picking.

These numbers also reflect the difference between the clear-cut and the more contentious forms of misconduct. As noted earlier, the chance of experiencing a positive, constructive consequence of reporting is remarkably low overall, with far more respondents perceiving no change at all.

Overall, therefore, we found that clear-cut cases of misbehaviour are reported more often than nuanced cases. Nuanced cases are difficult to assess normatively, which can make a (formal) misconduct case a precarious endeavour and may explain the difference. Given the difficulty of justifying such accusations, researchers may also feel the undertaking is too uncertain, risking long, intensive procedures and potential repercussions. Several respondents did indeed indicate as much in their answers:

No... since it's not actually forbidden, but still [an] unethical research practice (cherry picking, referring to “[w]ild and unjustified analyses until there is a significant result”; 30-39, female, Social Sciences, PhD student, temporary, 0-5 years).
I reported to the direct supervisor, but was sure I could not go beyond that, as that would directly have impeded my own relation with my own supervisor (I am a PhD student). Since this is an instance of [an] incomplete assessment of [an] error on results, this does not seem to classify as direct misconduct (20-29, male, natural sciences, PhD student, temporary, 0-5 years).

Our survey results suggest that demographic differences affect the likelihood that alleged misconduct cases are acted upon, i.e. reported and dealt with constructively. Firstly, the analysis suggests that younger researchers, researchers with temporary appointments and those in lower academic positions are less likely to report misconduct, compared to their more senior and permanently appointed colleagues. Secondly, contested forms of misconduct (e.g. authorship, cherry picking of data) seem to be reported less than more clear-cut instances of misconduct (e.g. plagiarism, text recycling and the falsification of data).

These trends, which emerge quantitatively, can be meaningfully interpreted in combination with the qualitative data. In the respondents’ answers to the open-ended survey questions, they frequently attribute their decisions regarding whether and how to report to hierarchy or power issues. Some respondents do this very explicitly by referring to their previous superior, or to their current hierarchical relationship with their supervisor. Others hint more implicitly at hierarchy and power issues, implying power imbalances in the sense of resource dependence.

Based on the numerical data, our respondents’ interpretations of them, and the context, we can refine and translate the general claims made in the whistleblowing literature into specific propositions on reporting alleged research misbehaviour at academic institutes. Owing to sample size limitations and the high diversity among participating research institutes, our results are insufficient for definitive, statistically robust conclusions, but we believe they provide strong indications that future studies could verify. We formulate the propositions for each of the variables studied separately: academic seniority, the temporality of work contracts, gender, and the type of misconduct witnessed.

The first set of statements involves academic seniority and age. Overall, the results suggest that seniority in research constitutes a valuable resource in respect of reporting alleged research misconduct. This arguably manifests itself through access to important resources during the reporting process. Our respondents maintained that these resources include, for example, the knowledge to identify such misconduct and the most suitable reporting channel, the social capital and power position to act on misconduct (e.g. due to the low perceived career risk), as well as the ability to follow up on cases in order to secure a constructive outcome. Junior researchers may not possess the same knowledge and social capital, and fear harming their academic careers. These exploratory findings suggest the following propositions for further hypothesis-testing research:

Proposition 1: Researchers in junior academic positions and younger researchers are less likely to report instances of alleged research misconduct compared to more senior and older researchers.
Proposition 2: Reporting by researchers in junior positions or younger researchers is less likely to lead to constructive consequences compared to that of more senior and older researchers.

The second set of statements involves the temporality of work appointments. For researchers, as for other employees, temporary contracts constitute an element of power imbalance: temporary staff are excluded from tenured work agreements and face the risk of their contracts not being renewed. Our results provide initial evidence that this lack of power and social capital manifests itself in these researchers’ reporting behaviour. For example, the fear of negative personal consequences in the aftermath of reporting alleged misconduct, which respondents with temporary contracts expressed more often than others, is a manifestation of their lack of power. This leads us to the second set of propositions:

Proposition 3: Researchers with temporary contracts are less likely to report instances of alleged research misconduct than those with permanent contracts.
Proposition 4: Reporting by researchers with temporary contracts is less likely to lead to constructive consequences than reporting by those with permanent contracts.

The third set of statements involves gender. Somewhat surprisingly, we did not find any strong indications that gender differences play a role in reporting alleged research misconduct. However, we believe the theoretical underpinnings of gender as a central dimension of power imbalances in organisations, and in academia specifically (Aagaard 2016; Grilli and Allesina 2017; Treviño et al. 2017), are strong enough to warrant further studies. Despite our own null findings, we therefore propose the following propositions for further testing:

Proposition 5: Female witnesses of alleged research misconduct are less likely to report such instances than their male colleagues.
Proposition 6: Reporting by female researchers is less likely to lead to constructive consequences than reporting by male researchers.

The fourth and final statement category involves the characteristics of the alleged misconduct. Although not a distinct power dimension, this sheds light on researchers’ perception of whether their complaint will be taken seriously, acted upon, and lead to constructive consequences. Given that clear cases of misconduct are easier to identify and their reporting easier to justify (Miceli and Near 2005), we propose the following final statement:

Proposition 7: Researchers are more likely to report clear-cut instances of alleged research misconduct than more nuanced or ‘grey areas’ of misbehaviour.

The underreporting of ‘grey’ forms of misconduct matters because it allows such practices to continue unchecked. Not only are these forms of misconduct by definition difficult to assess normatively, they are also more likely to remain unspoken and implicit in research practice. This carries the risk of such practices becoming embedded and institutionalised rather than openly discussed and reflected upon. Indeed, the processing of misconduct allegations by institutional or national integrity committees has often led to the codification of research practices (Horbach et al. 2018). Consequently, if cases of specific types of alleged misconduct are not reported, and integrity committees therefore cannot assess them, these practices may never be classified as either proper or improper. The assessment of such research practices is hence in need of further research.

Conclusion and Recommendations

In this study, we have analysed reporting as one of the social control mechanisms flagging misbehaviour in science. In particular, we have studied the actors who are most likely to report alleged misconduct, how they report, and the consequences of reporting. We found differences in the rate of reporting and the consequences thereof, depending on the demographic characteristics of the person witnessing the case.

These insights contribute to the literature on research misconduct in two ways. Firstly, to our knowledge, we provide the first systematic insights into researchers’ reasons and explanations for reporting, or not reporting, witnessed misconduct. We find indications that younger researchers, researchers with temporary appointments and those in lower academic positions are less likely to act and report than their senior and permanently appointed colleagues. The crucial hurdles to reporting are these researchers’ concerns that reporting may harm their careers and their expectation of not being taken seriously, both of which are rooted in power relations and hierarchical differences that create resource dependence.

We also find that contested forms of misconduct (e.g. authorship issues and the cherry picking of data) are less likely to be reported than more clear-cut instances of misconduct (e.g. plagiarism, text recycling and the falsification of data). The respondents mention that minor misbehaviour is not considered worth reporting, or express doubts about the effectiveness of reporting a case when the witnessed behaviour does not explicitly transgress norms, as with many of the QRPs. Concerns about the negative consequences of reporting, such as harm to career opportunities or organisational reputations, also weigh on the decision.

Secondly, we have theorised the relationship between power differences and researchers’ willingness to report—in particular the role of seniority, work appointments and gender. We have derived a list of seven propositions that we believe warrant testing and refinement in future studies using a larger sample to help with further theory building about power differences and research misconduct. More specifically, by focusing on such structural power dimensions, we provide a different perspective than most prior studies of scientific misconduct, which have mainly focused on the negative consequences for the individual wrongdoer and his/her colleagues. We thus open up a broader organisational understanding of the mechanisms that impact researchers’ ability and willingness to successfully report misconduct.

Based on our study, we argue that establishing adequate reporting procedures is a prime requirement to empower less powerful members of the research community to report scientific misbehaviour. This may also specifically strengthen one of science’s most important social control mechanisms in which direct colleagues check each other’s work. Following Lukes’s and Bachrach and Baratz’s conception of power in the form of agenda setting (Lukes 2005 ; Bachrach and Baratz 1962 ), reporting procedures are a prime way of making latent and covert interests visible, thereby demanding decisions from those in power.

Our findings may have several implications for policy. We argue that policy interventions, such as research integrity courses for junior researchers, the articulation of research integrity codes, and integrity boards, have to take account of the power imbalances in research organisations. Our results suggest a need for improved reporting procedures. Specifically, such procedures should take into account the position of an organisation’s less powerful members, such as junior researchers and people with temporary work appointments, and facilitate their reporting. This requires procedures that effectively address issues of power imbalance and the fear of not being taken seriously. Implementing such procedures could help counter a culture of complacency and cynicism that normalises questionable research practices. In addition, it may contribute to a sense of organisational responsibility that should ultimately foster a climate of research integrity.

Aagaard, K. (2016). New and persistent gender equality challenges in academia. Scandinavian Journal of Public Administration, 20 (1), 87–90.

Al-Marzouki, S., Evans, S., Marshall, T., & Roberts, I. (2005). Are these data real? Statistical methods for the detection of data fabrication in clinical trials. BMJ, 331 (7511), 267–270. https://doi.org/10.1136/bmj.331.7511.267 .

Anderson, M. S., Ronning, E. A., De Vries, R., & Martinson, B. C. (2007). The perverse effects of competition on scientists’ work and relationships. Science and Engineering Ethics, 13 (4), 437–461.

Azoulay, P., Bonatti, A., & Krieger, J. L. (2017). The career effects of scandal: Evidence from scientific retractions. Research Policy, 46 (9), 1552–1569. https://doi.org/10.1016/j.respol.2017.07.003 .

Bachrach, P., & Baratz, M. S. (1962). The two faces of power. American Political Science Review, 56 , 941–952.

Biagioli, M., Kenney, M., Martin, B. R., & Walsh, J. P. (2019). Academic misconduct, misrepresentation and gaming: A reassessment. Research Policy, 48 (2), 401–413. https://doi.org/10.1016/j.respol.2018.10.025 .

Bjørkelo, B., & Matthiesen, S. B. (2012). Preventing and dealing with retaliation against whistleblowers. In W. Vandekerckhove, & D. Lewis (Eds.), Whistleblowing and democratic values: The international whistleblowing research network .

Bouter, L. M., Tijdink, J., Axelsen, N., Martinson, B. C., & ter Riet, G. (2016). Ranking major and minor research misbehaviors: Results from a survey among participants of four World Conferences on Research Integrity. Research Integrity and Peer Review, 1 (1), 17.

Bowie, N. E. (2010). Organizational integrity and moral climates. In G. G. Brenkert & T. L. Beauchamp (Eds.), The oxford handbook of business ethics . Oxford: Oxford University Press.

Broome, M. E., Pryor, E., Habermann, B., Pulley, L., & Kincaid, H. (2005). The Scientific Misconduct Questionnaire—Revised (SMQ-R): Validation and psychometric testing. Accountability in Research, 12 (4), 263–280. https://doi.org/10.1080/08989620500440253 .

Callaway, E. (2015). Faked peer reviews prompt 64 retractions. Nature News .

Cassematis, P. G., & Wortley, R. (2013). Prediction of whistleblowing or non-reporting observation: The role of personal and situational factors. Journal of Business Ethics, 117 (3), 615–634. https://doi.org/10.1007/s10551-012-1548-3 .

Clair, J. A. (2015). Procedural injustice in the system of peer review and scientific misconduct. Academy of Management Learning & Education, 14 (2), 159–172. https://doi.org/10.5465/amle.2013.0243 .

Culiberg, B., & Mihelic, K. K. (2017). The evolution of whistleblowing studies: A critical review and research agenda. Journal of Business Ethics, 146 (4), 787–803. https://doi.org/10.1007/s10551-016-3237-0 .

Dozier, J. B., & Miceli, M. P. (1985). Potential predictors of whistleblowing—A pro-social behavior perspective. Academy of Management Review, 10 (4), 823–836. https://doi.org/10.2307/258050 .

Fanelli, D. (2009). How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS ONE, 4 (5), 11. https://doi.org/10.1371/journal.pone.0005738 .

Fanelli, D., Costas, R., & Lariviere, V. (2015). Misconduct policies, academic culture and career stage, not gender or pressures to publish, affect scientific integrity. PLoS ONE, 10 (6), 18. https://doi.org/10.1371/journal.pone.0127556 .

Fanelli, D., Costas, R., Fang, F. C., Casadevall, A., & Bik, E. M. (2017). Why do scientists fabricate and falsify data? A matched-control analysis of papers containing problematic image duplications. bioRxiv . https://doi.org/10.1101/126805 .

Forsberg, E.-M., Anthun, F. O., Bailey, S., Birchley, G., Bout, H., Casonato, C., et al. (2018). Working with research integrity—Guidance for research performing organisations: The Bonn PRINTEGER statement. Science and Engineering Ethics, 24 (4), 1023–1034. https://doi.org/10.1007/s11948-018-0034-4 .

French, J., Raven, B., & Cartwright, D. (1959). The bases of social power. Classics of Organization Theory, 7 , 311–320.

Gao, J. Y., Greenberg, R., & Wong-On-Wing, B. (2015). Whistleblowing intentions of lower-level employees: The effect of reporting channel, bystanders, and wrongdoer power status. Journal of Business Ethics, 126 (1), 85–99. https://doi.org/10.1007/s10551-013-2008-4 .

Grilli, J., & Allesina, S. (2017). Last name analysis of mobility, gender imbalance, and nepotism across academic systems. Proceedings of the National Academy of Sciences, 114 (29), 7600–7605.

Guston, D. H. (2007). Between politics and science: Assuring the integrity and productivity of research. Cambridge: Cambridge University Press.

Habermann, B., Broome, M., Pryor, E. R., & Ziner, K. W. (2010). Research coordinators’ experiences with scientific misconduct and research integrity. Nursing Research, 59(1), 51–57. https://doi.org/10.1097/NNR.0b013e3181c3b9f2.

Hopp, C., & Hoover, G. A. (2017). How prevalent is academic misconduct in management research? Journal of Business Research, 80 , 73–81. https://doi.org/10.1016/j.jbusres.2017.07.003 .

Horbach, S. P. J. M., & Halffman, W. (2017). The ghosts of HeLa: How cell line misidentification contaminates the scientific literature. PLoS ONE, 12 (10), e0186281. https://doi.org/10.1371/journal.pone.0186281 .

Horbach, S. P. J. M., & Halffman, W. (2019). The extent and causes of academic text recycling or ‘self-plagiarism’. Research Policy, 48 (2), 492–502. https://doi.org/10.1016/j.respol.2017.09.004 .

Horbach, S. P. J. M., Breit, E., & Mamelund, S.-E. (2018). Organisational responses to alleged scientific misconduct: Sensemaking, sensegiving, and sensehiding. Science and Public Policy, 46 (3), 415–429. https://doi.org/10.1093/scipol/scy068 .

LaFollette, M. C. (1992). Stealing into print: Fraud, plagiarism, and misconduct in scientific publishing . Berkeley: University of California Press.

Lukes, S. (2005). Power: A radical view (2nd ed.). Basingstoke: Palgrave MacMillan.

Mamelund, S.-E., Breit, E., & Forsberg, E.-M. (2018). A multinational survey on research misconduct and integrity: A workfloor perspective (DIV. 2) . Oslo: PRINTEGER.

Martinson, B. C., Crain, A. L., De Vries, R., & Anderson, M. S. (2010). The importance of organisational justice in ensuring research integrity. Journal of Empirical Research on Human Research Ethics, 5 (3), 67–83. https://doi.org/10.1525/jer.2010.5.3.67 .

Mesmer-Magnus, J. R., & Viswesvaran, C. (2005). Whistleblowing in organizations: An examination of correlates of whistleblowing intentions, actions, and retaliation. Journal of Business Ethics, 62 (3), 277–297. https://doi.org/10.1007/s10551-005-0849-1 .

Miceli, M. P., & Near, J. P. (2005). Standing up or standing by: What predicts blowing the whistle on organizational wrongdoing? In J. Martocchio (Ed.), Research in personnel and human resources management (Vol. 24, pp. 95–136). Greenwich, CT: JAI/Elsevier Press.

Near, J. P., & Miceli, M. P. (1985). Organizational dissidence—The case of whistle-blowing. Journal of Business Ethics, 4 (1), 1–16. https://doi.org/10.1007/bf00382668 .

Near, J. P., & Miceli, M. P. (2016). After the wrongdoing: What managers should know about whistleblowing. Business Horizons, 59 (1), 105–114.

Paine, L. S. (1994). Managing for organizational integrity. Harvard Business Review, 72 (2), 106–117.

Palazzo, G. (2007). Organizational integrity—understanding the dimensions of ethical and unethical behavior in corporations. In W. C. Zimmerli, M. Holzinger, & K. Richter (Eds.), Corporate ethics and corporate governance (pp. 113–128). Berlin: Springer.

Palmer, D. (2012). Normal organizational wrongdoing: A critical analysis of theories of misconduct in and by organizations . Oxford: Oxford University Press.

Park, H., & Lewis, D. (2018). The negative health effects of external whistleblowing: A study of some key factors. The Social Science Journal . https://doi.org/10.1016/j.soscij.2018.04.002 .

PRINTEGER. (2016). Documents and results. Retrieved April 20, 2016, from https://printeger.eu/documents-results/.

Sacco, D. F., Bruton, S. V., & Brown, M. (2018). In defense of the questionable: Defining the basis of research scientists’ engagement in questionable research practices. Journal of Empirical Research on Human Research Ethics, 13 (1), 101–110. https://doi.org/10.1177/1556264617743834 .

Santoro, D., & Kumar, M. (2018). Speaking truth to power: A theory of whistleblowing (Vol. 6). Berlin: Springer.

Sarewitz, D. (2016). The pressure to publish pushes down quality. Nature, 533 (7602), 147–147.

Schulz, J. B., Cookson, M. R., & Hausmann, L. (2016). The impact of fraudulent and irreproducible data to the translational research crisis—Solutions and implementation. Journal of Neurochemistry, 139 , 253–270. https://doi.org/10.1111/jnc.13844 .

Silverman, D. (2016). Qualitative research (Vol. 3). London: Sage.

Stitzel, B., Hoover, G. A., & Clark, W. (2018). More on plagiarism in the social sciences. Social Science Quarterly. https://doi.org/10.1111/ssqu.12481.

Strauss, A., & Corbin, J. (1990). Basics of qualitative research . London: Sage Publications.

Stroebe, W., Postmes, T., & Spears, R. (2012). Scientific misconduct and the myth of self-correction in science. Perspectives on Psychological Science, 7 (6), 670–688. https://doi.org/10.1177/1745691612460687 .

Tijdink, J. K., Bouter, L. M., Veldkamp, C. L. S., van de Ven, P. M., Wicherts, J. M., & Smulders, Y. M. (2016). Personality traits are associated with research misbehavior in Dutch scientists: A cross-sectional study. PLoS ONE, 11 (9), e0163251. https://doi.org/10.1371/journal.pone.0163251 .

Treviño, L. J., Balkin, D. B., & Gomez-Mejia, L. R. (2017). How “doing gender” leads to gender imbalances in the higher ranks in colleges of business [and how to “undo gender”]. Academy of Management Learning & Education, 16 (3), 439–453.

Vandekerckhove, W. (2016). Whistleblowing and organizational social responsibility: A global assessment . Abingdon: Routledge.

Yang, W. (2013). Research integrity in China. Science, 342 (6162), 1019–1019. https://doi.org/10.1126/science.1247700 .

This work was supported by the European Union’s Horizon 2020 research and innovation programme, grant agreement number 665926 (PRINTEGER).

Author information

Serge P. J. M. Horbach and Eric Breit have contributed equally to this article.

Authors and Affiliations

Faculty of Science - Institute for Science in Society, Radboud University Nijmegen, P.O. Box 9010, 6500 GL, Nijmegen, The Netherlands

Serge P. J. M. Horbach & Willem Halffman

Center for Science and Technology Studies, Leiden University, Leiden, The Netherlands

Serge P. J. M. Horbach

Work Research Institute, OsloMet – Oslo Metropolitan University, St. Olavs Plass, P.O. Box 4, 0130, Oslo, Norway

Eric Breit & Svenn-Erik Mamelund

Corresponding author

Correspondence to Serge P. J. M. Horbach .


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary file1 (PDF 701 kb)

Supplementary file 2 (XLSX 73 kb): Appendix with an overview of coding and illustrative examples.


About this article

Horbach, S.P.J.M., Breit, E., Halffman, W. et al. On the Willingness to Report and the Consequences of Reporting Research Misconduct: The Role of Power Relations. Sci Eng Ethics 26, 1595–1623 (2020). https://doi.org/10.1007/s11948-020-00202-8

Received : 06 March 2019

Accepted : 15 February 2020

Published : 26 February 2020

Issue Date : June 2020


  • Research integrity
  • Research misconduct
  • Whistleblowing
  • Power relations
  • Organisations


Research Misconduct: Definitions (NIH)

Fabrication:  Making up data or results and recording or reporting them.

Falsification:  Manipulating research materials, equipment, or processes, or changing or omitting data or results such that the research is not accurately represented in the research record.

Plagiarism:  The appropriation of another person's ideas, processes, results, or words without giving appropriate credit.


Michigan State University, The Graduate School

RECR Workshop: Research Misconduct and Detrimental Research Practices

Conduct in research spans a spectrum, from work performed with integrity to outright research misconduct. Accusations of misconduct are extremely serious, and the consequences grave. This session will help you understand the spectrum of behaviors that can arise in research and scholarly activities. Examples of misconduct and detrimental research practices will be presented. There will be discussions about best practices for avoiding misconduct and detrimental practices in research and scholarly activity, and about whom to approach with concerns about research activities. The session will review the processes of reporting, investigating, and reviewing alleged misconduct.

More details on the  RECR workshops page .


Perspect Clin Res, 4(2), Apr–Jun 2013

Fraud and misconduct in clinical research: A concern

Ashwaria Gupta

Sr. CRA Group Head, Novartis Health Care Private Limited, Sandoz House, Worli, Mumbai, India

Fraud and misconduct in clinical research are widespread. Good clinical practice is a guideline adopted internationally as the standard operating procedure for the conduct of clinical research. Despite the availability of these guidelines, the lack of an internationally harmonized framework for managing research fraud and misconduct leaves clinical research highly vulnerable to fraud. Fraud can take various forms and arise for various reasons. Whatever the circumstances, fraud should be dealt with strictly, and regulations should be in place to prevent its occurrence.

INTRODUCTION

Scientific fraud reappears with alarming consistency, from paleontology to nanotechnology. Several studies have found that more than 40% of surveyed researchers were aware of misconduct but did not report it. Sheehan et al. reported in 2005 that 17% of surveyed authors of clinical drug trials personally knew of fabrication in research occurring over the previous 10 years.[1] Quality at sites is usually judged by audits and inspections. Rates of "official action indicated" outcomes have been as high as 23% for for-cause inspections conducted by the US Food and Drug Administration (USFDA) over the last several years.[2] These results indicate that a substantial problem exists. Fraud or misconduct can cause a study to lose its credibility entirely. Moreover, it can lead to ineffective or harmful treatments becoming available, or to patients being denied effective treatment. This article discusses the difference between fraud and misconduct, the possible reasons for their occurrence, and options that may help prevent such instances.

ARE FRAUD AND MISCONDUCT THE SAME?

Fraud and misconduct are two terms often used interchangeably; however, there is a clear distinction between them. Scientific misconduct/fraud is a violation of the standard codes of scholarly conduct and ethical behavior in scientific research. Fraud, as defined in court, is “the knowing breach of the standard of good faith and fair dealing as understood in the community, involving deception or breach of trust, for money.”[1] Fraud is an intentional deception made for personal gain or to damage another individual, for instance by intentionally falsifying and/or fabricating research data and misleadingly reporting the results. Misconduct, by contrast, need not be intentional; it may be an act of poor management. It also includes failure to follow established protocols if this failure results in unreasonable risk or harm to humans.[3] Fraud involves an element of deliberate action, which need not be the case with misconduct.

The Medical Research Council (MRC) definition of misconduct and fraud (or a variation of the MRC code) is widely used. This code states the following definition:

The fabrication, falsification, plagiarism or deception in proposing, carrying out or reporting results of research or deliberate, dangerous or negligent deviations from accepted practices in carrying out research. It includes failure to follow established protocols if this failure results in unreasonable risk or harm to humans, other vertebrates or the environment and facilitating of misconduct in research by collusion in, or concealment of, such actions by others. It also includes intentional, unauthorised use, disclosure or removal of, or damage to, research-related property of another, including apparatus, materials, writings or devices used in or produced by the conduct of research. It does not include honest error or honest differences in the design, execution, interpretation or judgement in evaluating research methods or results or misconduct unrelated to the research process. Similarly it does not include poor research unless this encompasses the “intention to deceive” (MRC, 1997).[ 4 ]

WHY DOES ANYONE COMMIT FRAUD/MISCONDUCT?

Reasons for fraud/misconduct in clinical research can be personal or professional. Fraud may result from professional overambition to become famous, from the prestige of being part of international clinical trials, or from financial interests. At times it may stem from the laziness of the researcher or site staff in complex studies requiring repeated assessments, e.g., repeat blood pressure measurements (with readings rounded off to the nearest 5 mm Hg) or timed spirometry assessments. Misconduct can also arise when an investigator strongly believes intuitively in the “right” answer despite the available evidence being contrary.[5] It may also be due to innocent ignorance, such as backdating a subject's signature on a consent form because the subject forgot to date the form initially, discarding source documents after accurate transcription, or even creating source documents from case record forms. Pressure for promotion and tenure, competition among investigators, the need for recognition, ego, personality factors, and conflicting personal and professional obligations are some of the factors that can lead certain individuals to engage in fraud/misconduct. There can also be environmental factors, such as the amount of oversight of the study, the existence of explicit versus implicit rules, the penalties and rewards attached to such rules, the extent of training imparted, the regulations involved, and insufficient mentoring.[6]

ARE THERE DIFFERENT TYPES OF FRAUD/MISCONDUCT?

Fraud can involve fabrication, falsification, and plagiarism of data, or even deception in conduct. Fabricating data involves creating a new record of data or results; the most commonly fabricated documents are informed consent forms and patient diaries. Falsifying data means altering existing records: the deliberate distortion or omission of undesired data or results. Plagiarism, on the other hand, is the unacknowledged presentation or exploitation of the work and ideas of others as one's own. Deception in clinical research is the deliberate concealment of a conflict of interest or the inclusion of deliberately misleading statements in research proposals or other documents.

The most common types of misconduct in clinical research are: Failure to follow an investigational plan; inadequate and inaccurate records; inadequate drug accountability; inadequate completion of informed consent forms; failure to report adverse drug reactions; failure to obtain and/or document subject consent; failure to notify an Institutional Review Board (IRB)/Ethics Committee (EC) of changes/progress reports; failure to obtain or document IRB approval.[ 3 , 7 ]

CAN RESEARCH FRAUD BE PICKED UP EARLY?

Red flags, or warning signals, during the conduct of a clinical trial should prompt the monitor to be more vigilant and look at the data with a magnifying glass. For example, for patients seen at a given medical center or by a particular doctor, excessive instances of perfect attendance on the scheduled day can be a hallmark of falsified data.[1] The most important identifiers include implausible trends, e.g., 100% drug compliance, identical laboratory or electrocardiogram results, no serious adverse events reported, or subjects adhering perfectly to a visit schedule.[3] Furthermore, certain practices or behavior at the site or by site personnel should raise suspicion in the mind of the monitor, even though they may not definitely indicate any kind of fraud. Major differences in trends at a particular site compared with other sites, unusually fast recruitment, very few withdrawals, very few adverse events being reported, all drugs being dispensed in a similar manner (e.g., all tubes of cream being pressed at the same point), repeated postponement of meetings, or the same pen being used throughout the study are some of the indicators that should prompt a monitor to look at the site more closely.[8]
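The red flags listed above lend themselves to simple centralised checks across sites. The sketch below is an illustration only: the column names (site, compliance_pct, n_adverse_events, days_off_schedule) and the data are invented for the example, and a flag is merely a prompt for closer monitoring, not evidence of fraud.

import pandas as pd

# Hypothetical per-visit monitoring extract (three sites, three visits each).
visits = pd.DataFrame({
    "site":              ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "compliance_pct":    [100, 100, 100, 92, 87, 95, 98, 100, 91],
    "n_adverse_events":  [0, 0, 0, 1, 0, 2, 0, 1, 0],
    "days_off_schedule": [0, 0, 0, 2, -1, 3, 1, 0, -2],
})

per_site = visits.groupby("site").agg(
    mean_compliance=("compliance_pct", "mean"),
    adverse_events=("n_adverse_events", "sum"),
    mean_schedule_deviation=("days_off_schedule", lambda s: s.abs().mean()),
)

# Flag sites showing the implausible patterns described above.
flags = per_site[
    (per_site["mean_compliance"] >= 99.5)           # near-perfect drug compliance
    | (per_site["adverse_events"] == 0)             # no adverse events at all
    | (per_site["mean_schedule_deviation"] < 0.1)   # visits always exactly on schedule
]
print(flags)  # here only site A is flagged for closer review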

WHAT COULD BE THE IMPACT OF FRAUD?

The impact on affected individuals and the research community can be profound. Such incidents impose huge costs on the sponsor in terms of the additional resources needed to investigate fraud and the cost of possibly repeating the aspects of the research that were fraudulent. They can also lead to disciplinary action against researchers: such a researcher may not be allowed to serve on any advisory committee or peer review board, and any article published by such a researcher may be re-reviewed and retracted if required. Fraudulent clinical research also affects the validity of data and adversely affects the core of good clinical practice, i.e., the rights, safety, and well-being of research participants. On the broader scale of health care, it can lead to wrong, ineffective, or harmful molecules being brought to market.[9, 10]

HOW CAN WE STRENGTHEN THE DETECTION OF RESEARCH MISCONDUCT AND FRAUD?

The role of IRBs/ECs in safeguarding the interests of research participants should be strengthened. They should have internal control and review mechanisms for monitoring the ethical and quality aspects of ongoing studies. Existing regulations must be simplified and made more effective; where no regulations exist, they must be put in place to manage fraud. All organizations involved in clinical research should have clear operational policies and procedures for addressing research misconduct and fraud. Whistle-blowers should be encouraged, and internationally agreed guidelines should be in place to safeguard them.

WHAT ARE VARIOUS COUNTRIES DOING TO MANAGE RESEARCH FRAUD?

Although fraud is recognized as a criminal act by all nations, there are no international rules that harmonize the management and regulation of dishonesty or misconduct in clinical research. Most countries do not have laws specific to fraud in clinical research and have adopted their own approaches. Table 1 lists the agencies relevant to research fraud.

Agencies relevant to research fraud (adapted from Sheehan[ 1 ])


In the United States, there are different bodies, such as the Office for Human Research Protections, which provides guidance, education, and clarification on the protection of human research subjects. Another body, the Office of Research Integrity, promotes integrity in biomedical and behavioral research. The FDA plays a major role in the prevention and detection of fraud: if a site has not complied with regulatory requirements or has engaged in fraudulent activity, the FDA has the power to disqualify the investigator from taking part in further research. The National Research Ethical Council of Finland produces guidelines for the prevention and investigation of alleged scientific dishonesty; however, the responsibility for taking action against those found guilty remains with universities and research institutes. In Denmark, the Danish Committee on Scientific Dishonesty, which was split into three groups that often sit together to consider cases, can recommend sanctions in cases of fraud. In Norway, the National Committee for the Evaluation of Dishonesty in Health Research has, since 1994, reported its findings to the institution and the parties involved, but again leaves any sanctions to the employers. In Sweden, institutions conduct their own investigations, with an expert advisory group linked to the Swedish MRC (MFR) providing guidance. Every institution in Germany also has its own committee to investigate and suggest actions in cases of suspected research misconduct; in addition, the Committee of Inquiry on Allegations of Scientific Misconduct investigates allegations of scientific misconduct by those who receive funding from the Deutsche Forschungsgemeinschaft (DFG), an academic research funding agency. If scientific misconduct is established, the committee's findings are forwarded to the central steering Joint Committee with a recommendation. France has a principal medical body (Délégation à l'Intégrité Scientifique) that focuses on both the prevention of research fraud and the sanctions to be taken against individuals or institutions found guilty.[7, 11] A national panel for research integrity has been proposed in the United Kingdom as a joint venture between UK universities and the Department of Health, to provide independent support to the health and biomedical sciences research community in establishing and demonstrating effective systems for research integrity and in sharing and promoting best practice.[12]

India also has no specific law pertaining to scientific fraud. The responsibility for investigating and acting on fraudulent instances remains with universities, sponsors, or institutions, which must then notify the Drug Controller General of India, the central body responsible for approving clinical trials in India.

CAN WE PREVENT FRAUD FROM EVER HAPPENING?

It may not be possible to prevent fraud completely, but measures can certainly be taken to reduce its incidence substantially. In a 2008 Nature article entitled “Repairing research integrity,” Titus et al.[13] listed six strategies to champion research integrity:

  • Adopt zero tolerance: all suspected misconduct must be reported and all allegations must be thoroughly and fairly investigated.
  • Protect whistle-blowers: careful attention must be paid to the creation and dissemination of measures to protect whistleblowers.
  • Clarify how to report: establish clear policies, procedures and guidelines related to misconduct and responsible conduct.
  • Train the mentors: researchers must be educated to pay more attention to how they work with their junior team members.
  • Use alternative mechanisms: institutions need continuing mechanisms to review and evaluate the research and training environment of their institution, such as internal auditing of research records.
  • Model ethical behavior: institutions successfully stop cheating when they have leaders who communicate what is acceptable behavior, develop fair and appropriate procedures for handling misconduct cases, develop and promote ethical behavior and provide clear deterrents that are communicated.

CONCLUSION: CAN WE BUILD THE ‘CULTURE’ OF RESEARCH?

Research fraud is a reality from which nobody can shy away. Moreover, clinical research is very vulnerable to fraud because, in most countries, there is no effective mechanism in place for detecting, investigating, and prosecuting it. It is therefore critical that a “culture of research” be developed within the system, based on the fundamentals of integrity, openness, and honest work. There should be official bodies in each country that can investigate and prosecute clinical research fraud. Every organization involved in clinical research should have and implement clear policies and Standard Operating Procedures (SOPs) that encourage the disclosure of fraud. No matter the circumstances surrounding a case, research fraud should be treated as very serious and should not be taken lightly. Open communication among research groups on this important aspect of clinical research, in addition to discussion of ongoing projects and practices, may help reduce the incidence of fraud, if not prevent it completely. Finally, the emphasis should be on quality rather than quantity.

Source of Support: Nil

Conflict of Interest: None declared.

Nature, 03 June 2019

Make reports of research misconduct public


C. K. Gunsalus

C. K. Gunsalus is director of the National Center for Professional and Research Ethics at the University of Illinois Urbana–Champaign.


During decades as a research-integrity officer, expert witness for misconduct investigations and consultant, I have been inspired — and I have seen inexcusable conduct. Even when investigations are exemplary and findings clear, universities rarely report them publicly. That secrecy perpetuates misbehaviour and breeds mistrust — as evidenced by the ongoing revelations of universities that failed to respond appropriately, sometimes for years, to allegations of sexual misconduct.


Nature 570, 7 (2019). https://doi.org/10.1038/d41586-019-01728-z



Research Misconduct (DOE definition)

The fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results. Fabrication is making up data or results and recording or reporting them. Falsification is manipulating research materials, equipment, or processes, or changing or omitting data or results such that the research is not accurately represented in the research record. Plagiarism is the appropriation of another person’s ideas, processes, results, or words without giving appropriate credit. Research misconduct does not include honest error or differences of opinion. 10 CFR 733.3 (see also, Federal Policy on Research Misconduct; Preamble for Research Misconduct Policy; 65 FR 7626).

  • DOE P 411.2B, DOE Scientific Integrity Policy (dated January 19, 2024; status: current)

COMMENTS

  1. Explanations of Research Misconduct, and How They Hang Together

    In a helpful article, Benjamin Sovacool distinguishes three 'narratives' about research misconduct: one in terms of (1) impure individuals, another in terms of (2) the failures of this or that particular university or research institute, and a third in terms of (3) the corrupting structure of the practice of modern science as such; he suggests these three narratives are incommensurable.

  2. Should research misconduct be criminalized?

    Retraction of flawed work is a major mechanism of scientific self-correction. Yet not all authors found guilty of research misconduct have articles retracted (Drimer-Batca et al., 2019). Data show that although there is an increasing number of retracted biomedical and life-science papers, 67% of which are attributable to misconduct (Fang et al., 2012), only 39 scientists from 7 countries have ...

  3. A review of the current concerns about misconduct in medical sciences

    Thus, to help meet this challenge, this article begins by defining research misconduct and outlining its more recent types, then provides a full explanation of both its causes and consequences through notable examples, and lastly offers possible solutions by drawing on globally accepted guidelines. As far ...

  4. Addressing Research Misconduct and Detrimental Research Practices

    Synopsis: Research misconduct and detrimental research practices are addressed in several ways. Addressing them through standards and best practices, such as effective mentoring at the lab level, requirements for data and code sharing at the disciplinary level, and greater transparency in reporting results, can ...

  5. Retractions are part of science, but misconduct isn't

    Research misconduct is hugely detrimental to science and to society. Defined as "fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting ...

  6. Research integrity is much more than misconduct

    Research misconduct encompasses fraud, fabrication and plagiarism. It is essential to deal with such dishonesty thoroughly and fairly, but it's patching up a tear after the damage is done ...

  7. Incidence and Consequences

    Synopsis: Research misconduct and detrimental research practices constitute serious threats to science in the United States and around the world. The incidence of research misconduct is tracked by official statistics, survey results, and analysis of retractions, and all of these indicators have shown increases over time. However, as there are no definitive data, it is difficult to say precisely ...

  8. 'Gagged and blindsided': how an allegation of research misconduct

    C. K. Gunsalus, a research-integrity specialist who had no involvement with Sasisekharan's case, says that regulations typically recommend a 120-day timescale for misconduct investigations, but ...

  9. Leading the charge to address research misconduct

    Research led by Daniele Fanelli, PhD, a fellow in quantitative methodology at the London School of Economics and Political Science, found that a culture of transparency and communication, paired with strong rules around misconduct, is associated with fewer misconduct-related retractions (PLOS ONE, Vol. 10, No. 6, 2015).

  10. A scoping review of the literature featuring research ethics and

    The areas of Research Ethics (RE) and Research Integrity (RI) are rapidly evolving. Cases of research misconduct, other transgressions related to RE and RI, and forms of ethically questionable behaviors have been frequently published. The objective of this scoping review was to collect RE and RI cases, analyze their main characteristics, and discuss how these cases are represented in the ...

  11. Deceiving scientific research, misconduct events are possibly a more

    Background: Today, scientists and academic researchers experience enormous pressure to publish innovative and ground-breaking results in prestigious journals. This pressure may distort the general view of how scientific research should be conducted with respect to the general rules of transparency; duplication of data and co-authorship rights might be compromised. As such, misconduct acts ...

  12. Research misconduct in health and life sciences research: A ...

    Considering authors affiliated with Brazilian institutions, this review concluded that most retractions of articles in health and life sciences were due to research misconduct. Journals, funders, academic institutions, and researchers have an important educational and surveillance role to play in preventing research misconduct.

  13. Scientific Misconduct and Medical Journals

    Although not much is known about the prevalence of scientific misconduct, several studies with limited methods have estimated that the prevalence of scientists who have been involved in scientific misconduct ranges from 1% to 2% [4-6]. During the last 5 years, JAMA and the JAMA Network journals have published 12 notices of Retraction about 15 ...

  14. Journal editors and publishers' legal obligations with respect to

    That review identified 42 articles and found that the prevalence of admitted research misconduct among scientists was 2.9%, and the prevalence of questionable research practices (QRPs) was 12.5% (Xie et al., 2021: 6). The prevalence of witnessed research misconduct was 15.5%, and that of witnessed QRPs was 39.7% (Xie et al., 2021: 14).

  15. On the Willingness to Report and the Consequences of Reporting Research

    While attention to research integrity has been growing over the past decades, the processes of signalling and denouncing cases of research misconduct remain largely unstudied. In this article, we develop a theoretically and empirically informed understanding of the causes and consequences of reporting research misconduct in terms of power relations. We study the reporting process based on a ...

  16. Scientific Misconduct: A Global Concern

    What is Scientific Misconduct? "Research misconduct is defined as fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results". Another definition of research misconduct is given as "any behavior by a researcher, whether intentional or not, that fails to scrupulously respect high scientific and ethical standards."

  17. Beyond the traditional: Extending academic libraries' roles in research

    The causes of research misconduct and their relationships can be used as evidence for academic libraries to develop research integrity services. The literature analysis pointed to 21 causes of research misconduct in four themes: individual, organizational, professional, and cultural.

  18. Nine pitfalls of research misconduct

    C.K.G. coined the mnemonic TRAGEDIES (Temptation, Rationalization, Ambition, Group and authority pressure, Entitlement, Deception, Incrementalism, Embarrassment and Stupid systems) to capture the ...

  19. Scientific research misconducts: An overview

    Abstract. Research misconduct is defined as fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results (Anderson, 2013; Breen, 2016 ...

  20. Research Misconduct News, Articles

    Chief Academic Officer Accused in Ongoing Research Scandal at UCL. New allegations of fraud committed under the watch of geneticist David Latchman were made last year. The latest news and opinions in research misconduct from The Scientist, the life science researcher's most trusted source of information.

  21. Research misconduct in China: towards an institutional analysis

    Research misconduct in Chinese academia has gained international attention following the retraction of a significant number of academic articles from English- and Chinese-language journals (Chen et al., 2018; Wang et al., 2023). According to an analysis in the journal Nature, Hindawi, one of the world's largest publishers of peer-reviewed, fully open access journals, in 2023 'issued more ...

  22. Research Misconduct: Reasons and Types of Research Misconduct

    Research misconduct can dilute research, lead to misinterpretations, and erode trust in science. Science is built on a foundation of integrity and trust, and questionable research practices or research misconduct is counter-productive to the production and use of scientific knowledge.

  23. Exclusive: investigators found plagiarism and data falsification in

    In Garofalo's case, a committee found 11 cases of research misconduct — 7 concerning plagiarism and 4 image falsification — in 8 papers published while she was in Croce's laboratory (of ...

  24. Research Misconduct

    Research Misconduct - Definitions. Research misconduct is defined as fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results, according to 42 CFR Part 93.

  25. Full article: Our cheating is not your cheating: signature misconduct

    Reluctant students in these classes have low intrinsic motivation and some anxiety about mathematics, resulting in a dislike that Anderman and Sungjun (2019) related to misconduct. An educational research network of mathematicians studying learning and teaching in undergraduate mathematics developed independently to the emergence of ...

  26. RECR Workshop: Research Misconduct and Detrimental Research Practices

    Conduct in research spans a spectrum, from that performed with integrity to true research misconduct. Accusations of misconduct are incredibly serious, and the consequences grave. This session will help you understand the spectrum of behaviors that can arise while performing research and scholarly activities. Examples of misconduct and detrimental research practices will be presented. There ...

  27. Fraud and misconduct in clinical research: A concern

    Fraud and misconduct are two terms often used interchangeably. However, there is a clear distinction between the two. Scientific misconduct/fraud is a violation of the standard codes of scholarly conduct and ethical behavior in scientific research. Fraud, as defined in court, is "the knowing breach of the standard of ...

  28. Make reports of research misconduct public

    Make reports of research misconduct public. Confronted with bad behaviour, institutions will keep asking the wrong questions until they have to show their working, says C. K. Gunsalus. During ...
