35 Media Bias Examples for Students

Media bias examples include ideological bias, gotcha journalism, negativity bias, and sensationalism. Real-life examples include ski resorts spinning snow reports to make conditions sound better, and cable news channels like Fox and MSNBC overtly favoring one political party over the other (Republican and Democratic, respectively).

No one is free of all bias. No one is perfectly objective. So, every book, research paper, and article (including this one) is bound to have some form of bias.

The media is capable of employing an array of techniques to modify news stories in favor of particular interests or groups.

While bias is usually seen as a bad thing, and good media outlets try to minimize it as much as possible, it can at times be benign or even useful. For example, a reporter’s bias toward scholarly consensus, or a local paper’s bias toward reporting on events relevant to local people, makes sense.

Media Bias Definition

Media bias refers to the inherently subjective processes involved in the selection and curation of information presented within media. It can lead to incorrect, inaccurate, incomplete, misleading, misrepresented, or otherwise skewed reporting.

Media bias cannot be fully eliminated. This is because media neutrality has practical limitations, such as the near impossibility of reporting every single available story and fact, the requirement that selected facts must form a coherent narrative, and so on (Newton, 1996).

Types of Media Bias

In a broad sense, there are two main types of media bias.

  • Ideological bias reflects a news outlet’s desire to move the opinions of readers in a particular direction.
  • Spin bias reflects a news outlet’s attempt to create a memorable story (Mullainathan & Shleifer, 2002).

These two main types can be divided into many subcategories. The following list offers a more specific classification of different types of media bias:

  • Advertising bias occurs when stories are selected or slanted to please advertisers (Eberl et al., 2018).
  • Concision bias occurs when conciseness determines which stories are reported and which are ignored. News outlets often report views that can be summarized succinctly, thereby overshadowing views that are more unconventional, difficult to explain, and complex.
  • Confirmation bias occurs when media consumers believe the stories, views, and research that confirm their current views and ignore everything else (Groseclose & Milyo, 2005).
  • Content bias occurs when two political parties are treated differently and news is biased towards one side (Entman, 2007).
  • Coverage bias occurs when one party or ideology receives more coverage, or more negative coverage, than others (Eberl et al., 2017; D’Alessio & Allen, 2000).
  • Decision-making bias occurs when the motivations, beliefs, and intentions of the journalists have an impact on what they write and how (Entman, 2007).
  • Demographic bias occurs when demographic factors, such as race, gender, social status, income, and so on are allowed to influence reporting (Ribeiro et al., 2018).
  • Gatekeeping bias occurs when stories are selected or dismissed on ideological grounds (D’Alessio & Allen, 2000). This is sometimes also referred to as agenda bias, selectivity bias (Hofstetter & Buss, 1978), or selection bias (Groeling, 2013). Such bias is often focused on political actors (Brandenburg, 2006).
  • Layout bias occurs when an article is placed in a less-read section so that it receives less attention, or placed first so that more people read it. Downplaying a story this way is sometimes called burying the lead.
  • Mainstream bias occurs when a news outlet only reports things that are safe to report and everyone else is reporting. By extension, the news outlet ignores stories and views that might offend the majority.
  • Partisan bias occurs when a news outlet tends to report in a way that serves a specific political party (Haselmayer et al., 2017).
  • Sensationalism bias occurs when the exceptional, the exciting, and the sensational are given disproportionate attention because rare events seem more newsworthy than everyday ones.
  • Statement bias occurs when media coverage is slanted in favor of or against specific actors or issues (D’Alessio & Allen, 2000). It is also known as tonality bias (Eberl et al., 2017) or presentation bias (Groeling, 2013).
  • Structural bias occurs when an actor or issue receives more or less favorable coverage as a result of newsworthiness instead of ideological decisions (Haselmayer et al., 2019; van Dalen, 2012).
  • Distance bias occurs when a news agency gives more coverage to events physically closer to the news agency than elsewhere. For example, national media organizations like NBC may be unconsciously biased toward New York City news because that is where they’re located.
  • Negativity bias occurs because negative information tends to attract more attention and is remembered for a longer time, even if it’s disliked in the moment.
  • False balance bias occurs when a news agency attempts to appear balanced by presenting a news story as if the data is 50/50 on the topic, while the data may in fact show one perspective should objectively hold more weight. Climate change is the classic example.

Media Bias Examples

  • Ski resorts reporting on snowfall: Ski resorts are biased in how they spin snowfall reporting. They consistently report higher snowfall than official forecasts because they have a supply-driven interest in doing so (Raymond & Taylor, 2021).
  • Moral panic in the UK: Cohen (1972) famously explored the UK media’s sensationalist reporting on youth subcultures as “delinquents,” which caused panic among the general population out of proportion to the groups’ true actions and impact on society.
  • Murdoch media in Australia: Former Prime Minister Kevin Rudd has consistently called out bias in the Murdoch media, highlighting, for example, that Murdoch’s papers have endorsed the conservative side of politics (ironically named the Liberals) in 24 out of 24 elections.
  • Fox and MSNBC: In the United States, Fox and MSNBC have niched down to report from a right- and left-wing bias, respectively.
  • Fog of war: During wartime, national news outlets tend to engage in overt bias against the enemy by reporting extensively on their war crimes while failing to report on their own war crimes.
  • Missing white woman syndrome: Sensationalism bias is evident in cases such as that of missing woman Gabby Petito. The argument is that media tend to report heavily on missing women when they are white, while making far less fuss about missing Indigenous women.
  • First-World Bias in Reporting on Natural Disasters: Scholars have found that news outlets tend to be biased toward reporting on first-world nations that have suffered natural disasters while under-reporting natural disasters in developing nations, which are seen as less newsworthy (Aritenang, 2022; Berlemann & Thomas, 2018).
  • Overseas Reporting on US Politics: Sensationalism bias has an effect when non-US nations report on US politics. Unlike other nations’ politics, US politics is heavily reported worldwide. One major reason is that US politics tends to be bitterly fought and lends itself to sensational headlines.
  • Click baiting: Media outlets that have moved to a predominantly online focus, such as Forbes and Vice, are biased toward news reports that can be summed up by a sensational headline to ensure they get clicked – this is called “click baiting”.
  • Google rankings and mainstream research bias: Google has explicitly put in its site quality rater guidelines a preference for sites that report in ways that reflect “expert consensus”. While this may be seen as a positive way to use bias, it can also push potentially valid alternative perspectives and whistleblowers off the front page of search results.
  • False Balance on climate change: Researchers at Northwestern University have highlighted the prevalence of false balance reporting on climate change. They argue that 99% of scientists agree that it is man-made, yet news segments often feature one scientist arguing each side, creating the impression of a 50-50 split in the scientific debate. In their estimation, an unbiased report would demonstrate the overwhelming scientific evidence supporting one side over the other.
  • Negative Unemployment Reports: Garz found that media tend to over-report negative unemployment statistics while under-reporting positive ones (Garz, 2014).
  • Gotcha Journalism: Gotcha journalism involves having journalists go out and actively seek out “gotcha questions” that will lead to sensational headlines. It is a form of bias because it often leads to less reporting on substantive messaging and an over-emphasis on gaffes and disingenuous characterizations of politicians.
  • Citizenship bias: When a disaster happens overseas, reporting often presents the total number of deceased, followed by the number from the news outlet’s home country. For example, an outlet might say: “51 dead, including 4 Americans.” This bias aims to make the news feel more relevant to the audience, but it nonetheless shows a bias toward the audience’s in-group.
  • Online indie media bias: Online indie media groups that have shot up on YouTube and social media often have overt biases. Left-wing examples include The Young Turks and The David Pakman Show, while right-wing examples include The Daily Wire and Charlie Kirk.
  • Western alienation: In Canada, this phenomenon refers to ostensibly national media outlets like The Globe and Mail having a bias toward news occurring in Toronto and ignoring western provinces, leading to “western alienation”.

The Government’s Role in Media Bias

Governments also play an important role in media bias due to their ability to distribute power.

The most obvious examples of pro-government media bias can be seen in totalitarian regimes, such as modern-day North Korea (Merloe, 2015). The government and the media can influence each other: the media can influence politicians and vice versa (Entman, 2007).

Nevertheless, even liberal democratic governments can affect media bias by, for example, leaking stories to their favored outlets and selectively calling upon their preferred outlets during news conferences.

In addition to the government, the market can also influence media coverage. Bias can be a function of who owns the media outlet in question, who its staff are, who the intended audience is, what gets the most clicks or sells the most newspapers, and so on.

Media bias refers to the bias of journalists and news outlets in reporting events, views, stories, and everything else they might cover.

The term usually denotes a widespread bias rather than something specific to one journalist or article.

There are many types of media bias. It is useful to understand the different types of biases, but also recognize that while good reporting can and does exist, it’s almost impossible to fully eliminate biases in reporting.

References

Aritenang, A. (2022). Understanding international agenda using media analytics: The case of disaster news coverage in Indonesia. Cogent Arts & Humanities, 9(1), 2108200.

Brandenburg, H. (2006). Party Strategy and Media Bias: A Quantitative Analysis of the 2005 UK Election Campaign. Journal of Elections, Public Opinion and Parties, 16(2), 157–178. https://doi.org/10.1080/13689880600716027

D’Alessio, D., & Allen, M. (2000). Media Bias in Presidential Elections: A Meta-Analysis. Journal of Communication, 50(4), 133–156. https://doi.org/10.1111/j.1460-2466.2000.tb02866.x

Eberl, J.-M., Boomgaarden, H. G., & Wagner, M. (2017). One Bias Fits All? Three Types of Media Bias and Their Effects on Party Preferences. Communication Research, 44(8), 1125–1148. https://doi.org/10.1177/0093650215614364

Eberl, J.-M., Wagner, M., & Boomgaarden, H. G. (2018). Party Advertising in Newspapers. Journalism Studies, 19(6), 782–802. https://doi.org/10.1080/1461670X.2016.1234356

Entman, R. M. (2007). Framing Bias: Media in the Distribution of Power. Journal of Communication, 57(1), 163–173. https://doi.org/10.1111/j.1460-2466.2006.00336.x

Garz, M. (2014). Good news and bad news: Evidence of media bias in unemployment reports. Public Choice, 161(3), 499–515.

Groeling, T. (2013). Media Bias by the Numbers: Challenges and Opportunities in the Empirical Study of Partisan News. Annual Review of Political Science, 16(1), 129–151. https://doi.org/10.1146/annurev-polisci-040811-115123

Groseclose, T., & Milyo, J. (2005). A Measure of Media Bias. The Quarterly Journal of Economics, 120(4), 1191–1237. https://doi.org/10.1162/003355305775097542

Haselmayer, M., Meyer, T. M., & Wagner, M. (2019). Fighting for attention: Media coverage of negative campaign messages. Party Politics, 25(3), 412–423. https://doi.org/10.1177/1354068817724174

Haselmayer, M., Wagner, M., & Meyer, T. M. (2017). Partisan Bias in Message Selection: Media Gatekeeping of Party Press Releases. Political Communication, 34(3), 367–384. https://doi.org/10.1080/10584609.2016.1265619

Hofstetter, C. R., & Buss, T. F. (1978). Bias in television news coverage of political events: A methodological analysis. Journal of Broadcasting, 22(4), 517–530. https://doi.org/10.1080/08838157809363907

Mackey, T. P., & Jacobson, T. E. (2019). Metaliterate Learning for the Post-Truth World. American Library Association.

Merloe, P. (2015). Authoritarianism Goes Global: Election Monitoring vs. Disinformation. Journal of Democracy, 26(3), 79–93. https://doi.org/10.1353/jod.2015.0053

Mullainathan, S., & Shleifer, A. (2002). Media Bias (NBER Working Paper No. w9295). National Bureau of Economic Research. https://doi.org/10.3386/w9295

Newton, K. (1996). The Mass Media and Modern Government. Wissenschaftszentrum Berlin für Sozialforschung.

Raymond, C., & Taylor, S. (2021). “Tell all the truth, but tell it slant”: Documenting media bias. Journal of Economic Behavior & Organization, 184, 670–691. https://doi.org/10.1016/j.jebo.2020.09.021

Ribeiro, F. N., Henrique, L., Benevenuto, F., Chakraborty, A., Kulshrestha, J., Babaei, M., & Gummadi, K. P. (2018). Media bias monitor: Quantifying biases of social media news outlets at large-scale. In Twelfth International AAAI Conference on Web and Social Media.

Sloan, W. D., & Mackay, J. B. (2007). Media Bias: Finding It, Fixing It. McFarland.

van Dalen, A. (2012). Structural Bias in Cross-National Perspective: How Political Systems and Journalism Cultures Influence Government Dominance in the News. The International Journal of Press/Politics, 17(1), 32–55. https://doi.org/10.1177/1940161211411087

Chris Drew (PhD)

Dr. Chris Drew is the founder of the Helpful Professor. He holds a PhD in education and has published over 20 articles in scholarly journals. He is the former editor of the Journal of Learning Development in Higher Education.


Should you trust media bias charts?

These controversial charts claim to show the political lean and credibility of news organizations. Here’s what you need to know about them.

Impartial journalism is an impossible ideal. That is, at least, according to Julie Mastrine.

“Unbiased news doesn’t exist. Everyone has a bias: everyday people and journalists. And that’s OK,” Mastrine said. But it’s not OK for news organizations to hide those biases, she said.

“We can be manipulated into (a biased outlet’s) point of view and not able to evaluate it critically and objectively and understand where it’s coming from,” said Mastrine, marketing director for AllSides , a media literacy company focused on “freeing people from filter bubbles.”

That’s why she created a media bias chart.

As readers hurl claims of hidden bias towards outlets on all parts of the political spectrum, bias charts have emerged as a tool to reveal pernicious partiality.

Charts that use transparent methodologies to score political bias — particularly the AllSides chart and another from news literacy company Ad Fontes Media — are increasing in popularity and spreading across the internet. According to CrowdTangle, a social media monitoring platform, the homepages for these two sites and the pages for their charts have been shared tens of thousands of times.

But just because something is widely shared doesn’t mean it’s accurate. Are media bias charts reliable?

Why do media bias charts exist?

Traditional journalism values a focus on news reporting that is fair and impartial, guided by principles like truth, verification and accuracy. But those standards are not observed across the board in the “news” content that people consume.

Tim Groeling, a communications professor at the University of California Los Angeles, said some consumers take too much of the “news” they encounter as impartial.

When people are influenced by undisclosed political bias in the news they consume, “that’s pretty bad for democratic politics, pretty bad for our country to have people be consistently misinformed and think they’re informed,” Groeling said.

If undisclosed bias threatens to mislead some news consumers, it also pushes others away, he said.

“When you have bias that’s not acknowledged, but is present, that’s really damaging to trust,” he said.

Kelly McBride, an expert on journalism ethics and standards, NPR’s public editor and the chair of the Craig Newmark Center for Ethics and Leadership at Poynter, agrees.

“If a news consumer doesn’t see their particular bias in a story accounted for — not necessarily validated, but at least accounted for in a story — they are going to assume that the reporter or the publication is biased,” McBride said.

The growing public confusion about whether news outlets harbor a political bias, disclosed or not, is fueling demand for resources that help sort fact from spin, such as media bias charts.

Bias and social media

Mastrine said the threat of undisclosed biases grows as social media algorithms create filter bubbles to feed users ideologically consistent content.

Could rating bias help? Mastrine and Vanessa Otero, founder of the Ad Fontes media bias chart, think so.

“It’ll actually make it easier for people to identify different perspectives and make sure they’re reading across the spectrum so that they get a balanced understanding of current events,” Mastrine said.

Otero said bias ratings could also be helpful to advertisers.

“There’s this whole ecosystem of online junk news, of polarizing misinformation, these clickbaity sites that are sucking up a lot of ad revenue. And that’s not to the benefit of anybody,” Otero said. “It’s not to the benefit of the advertisers. It’s not to the benefit of society. It’s just to the benefit of some folks who want to take advantage of people’s worst inclinations online.”

Reliable media bias ratings could allow advertisers to disinvest in fringe sites.

Groeling, the UCLA professor, said he could see major social media and search platforms using bias ratings to alter the algorithms that determine what content users see. Changes could elevate neutral content or foster broader news consumption.

But he fears the platforms’ sweeping power, especially after Facebook and Twitter censored a New York Post article purporting to show data from a laptop belonging to Hunter Biden, the son of President-elect Joe Biden. Groeling said social media platforms failed to clearly communicate how and why they stopped and slowed the spread of the article.

“(Social media platforms are) searching for some sort of arbiter of truth and news … but it’s actually really difficult to do that and not be a frightening totalitarian,” he said.

Is less more?

The Ad Fontes chart and the AllSides chart are each easy to understand: progressive publishers on one side, conservative ones on the other.

“It’s just more visible, more shareable. We think more people can see the ratings this way and kind of begin to understand them and really start to think, ‘Oh, you know, journalism is supposed to be objective and balanced,’” Mastrine said. AllSides has rated media bias since 2012. Mastrine first put them into chart form in early 2019.

Otero recognizes that accessibility comes at a price.

“Some nuance has to go away when it’s a graphic,” she said. “If you always keep it to, ‘people can only understand if they have a very deep conversation,’ then some people are just never going to get there. So it is a tool to help people have a shortcut.”

But perceiving the chart as distilled truth could give consumers an undue trust in outlets, McBride said.

“Overreliance on a chart like this is going to probably give some consumers a false level of faith,” she said. “I can think of a massive journalistic failure for just about every organization on this chart. And they didn’t all come clean about it.”

The necessity of getting people to look at the chart poses another challenge. Groeling thinks disinterest among consumers could hurt the charts’ usefulness.

“Asking people to go to this chart, asking them to take effort to understand and do that comparison, I worry would not actually be something people would do. Because most people don’t care enough about news,” he said. He would rather see a plugin that detects bias in users’ overall news consumption and offers them differing viewpoints.

McBride questioned whether bias should be the focus of the charts at all. Other factors — accountability, reliability and resources — would offer better insight into what sources of news are best, she said.

“Bias is only one thing that you need to pay attention to when you consume news. What you also want to pay attention to is the quality of the actual reporting and writing and the editing,” she said. It wouldn’t make sense to rate local news sources for bias, she added, because they are responsive to individual communities with different political ideologies.

The charts are only as good as their methodologies. Both McBride and Groeling praised the stated methods for rating bias of AllSides and Ad Fontes, which can be found on their websites. Neither Ad Fontes nor AllSides explicitly rates editorial standards.

The AllSides Chart

(Courtesy: AllSides)

The AllSides chart focuses solely on political bias. It places sources in one of five boxes — “Left,” “Lean Left,” “Center,” “Lean Right” and “Right.” Mastrine said that while the boxes allow the chart to be easily understood, they also don’t allow sources to be rated on a gradient.

“Our five-point scale is inherently limited in the sense that we have to put somebody in a category when, in reality, it’s kind of a spectrum. They might fall in between two of the ratings,” Mastrine said.

That also makes the chart particularly easy to understand, she said.

AllSides has rated more than 800 sources in eight years, focusing on online content only. Ratings are derived from a mix of review methods.

In the blind bias survey, which Mastrine called “one of (AllSides’) most robust bias rating methodologies,” readers from the public rate articles for political bias. Two AllSides staffers with different political leanings pull articles from the news sites under review. AllSides finds these unpaid readers through its newsletter, website, social media accounts, and other marketing channels. The readers, who self-report their political bias after taking a bias rating test provided by the company, see only the article’s text and are not told which outlet published the piece. The data is then normalized to more closely reflect the political composition of America.
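The normalization step resembles post-stratification weighting: each bias group's ratings are scaled to the group's share of the wider population rather than its share of the respondents. AllSides does not publish its formula in this form, so the following is only a hypothetical sketch; the group names, ratings, and population shares are invented for illustration.

```python
from collections import defaultdict

def normalized_mean(responses, population_share):
    """Reweight survey ratings so each self-reported bias group counts
    in proportion to its share of the population, not of respondents.

    responses: list of (bias_group, rating) pairs
    population_share: dict mapping bias_group -> population fraction
    """
    totals = defaultdict(float)
    counts = defaultdict(int)
    for group, rating in responses:
        totals[group] += rating
        counts[group] += 1
    # each group's average rating, weighted by its population share
    return sum(population_share[g] * totals[g] / counts[g] for g in counts)

# Invented example: left-leaning readers are overrepresented among
# respondents, so their ratings are scaled down to their true share.
responses = [("left", -2.0), ("left", -1.0), ("center", 0.0), ("right", 1.0)]
shares = {"left": 0.3, "center": 0.4, "right": 0.3}
print(normalized_mean(responses, shares))
```

Without the reweighting, the two left-leaning respondents would dominate a simple average of the four ratings.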

AllSides also uses “editorial reviews,” where staff members look directly at a source to contribute to ratings.

“That allows us to actually look at the homepage with the branding, with the photos and all that and kind of get a feel for what the bias is, taking all that into account,” Mastrine said.

She added that an equal number of staffers who lean left, right and center conduct each review together. The personal biases of AllSides’ staffers appear on their bio pages . Mastrine leans right.

She clarified that among the 20-person staff, many of whom are part time, 14% are people of color, 38% lean left or left, 29% are center, and 18% lean right or right. Half of the staffers are male, half female.

When a news outlet receives a blind bias survey and an editorial review, both are taken into account. Mastrine said the two methods aren’t weighted together “in any mathematical way,” but said they typically hold roughly equal weight. Sometimes, she added, the editorial review carries more weight.

AllSides also uses “independent research,” which Mastrine described as the “lowest level of bias verification.” She said it consists of staffers reviewing and reporting on a source to make a preliminary bias assessment. Sometimes third-party analyses — including academic research and surveys — are incorporated into ratings, too.

AllSides highlights the specific methodologies used to judge each source on its website and states its confidence in the ratings based on the methods used. In a separate white paper, the company details the process used for its August 2020 blind bias survey.

AllSides sometimes gives separate ratings to different sections of the same source. For example, it rates The New York Times’ opinion section “Left” and its news section “Lean Left.” AllSides also incorporates reader feedback into its system. People can mark that they agree or disagree with AllSides’ rating of a source. When a significant number of people disagree, AllSides often revisits a source to vet it once again, Mastrine said.

The AllSides chart generally gets good reviews, she said, and most people mark that they agree with the ratings. Still, she sees one misconception among the people that encounter it: They think center means better. Mastrine disagrees.

“The center outlets might be omitting certain stories that are important to people. They might not even be accurate,” she said. “We tell people to read across the spectrum.”

To make that easier, AllSides offers a curated “balanced news feed,” featuring articles from across the political spectrum, on its website.

AllSides makes money through paid memberships, one-time donations, media literacy training and online advertisements. It plans to become a public benefit corporation by the end of the year, she added, meaning it will operate both for profit and for a stated public mission.

The Ad Fontes chart

(Courtesy: Ad Fontes)

The Ad Fontes chart rates both reliability and political bias. It scores news sources — around 270 now, and an expected 300 in December — using bias and reliability as coordinates on its chart.

The outlets appear on a spectrum, with seven markers showing a range from “Most Extreme Left” to “Most Extreme Right” along the bias axis, and eight markers showing a range from “Original Fact Reporting” to “Contains Inaccurate/Fabricated Info” along the reliability axis.

The chart is a departure from its first version, which founder Vanessa Otero, a patent attorney, put together by herself as a hobby after seeing Facebook friends fight over the legitimacy of sources during the 2016 election. When she saw how popular the chart had become, Otero decided to make bias ratings her full-time job and founded Ad Fontes (Latin for “to the source”) in 2018.

“There were so many thousands of people reaching out to me on the internet about this,” she said. “Teachers were using it in their classrooms as a tool for teaching media literacy. Publishers wanted to publish it in textbooks.”

About 30 paid analysts rate articles for Ad Fontes. Listed on the company’s website, they represent a range of experience: current and former journalists, educators, librarians, and similar professionals. The company recruits analysts through its email list and references, and vets them through a traditional application process. Hired analysts are then trained by Otero and other Ad Fontes staff.

To start review sessions, a group of coordinators composed of senior analysts and the company’s nine staffers pulls articles from the sites being reviewed. They look for articles listed as most popular or displayed most prominently.

Part of the Ad Fontes analyst political bias test. The test asks analysts to rank their political bias on 18 different policy issues.

Ad Fontes administers an internal political bias test to analysts, asking them to rank their left-to-right position on about 20 policy positions. That information allows the company to attempt to create ideological balance by including one centrist, one left-leaning, and one right-leaning analyst on each review panel. The panels review at least three articles for each source, but they may review as many as 30 for particularly prominent outlets, like The Washington Post, Otero said. More on the methodology, including how articles are chosen for review, can be found on the Ad Fontes website.

When they review the articles, the analysts see them as they appear online, “because that’s how people encounter all content. No one encounters content blind,” Otero said. The review process recently changed so that paired analysts discuss their ratings over video chat, where they are pushed to be more specific as they form ratings, Otero said.

Individual scores for an article’s accuracy, the use of fact or opinion, and the appropriateness of its headline and image combine to create a reliability score. The bias score is determined by the article’s degree of advocacy for a left-to-right political position, topic selection and omission, and use of language.

To create an overall bias and reliability score for an outlet, the individual scores for each reviewed article are averaged, with added importance given to more popular articles. That average determines where sources show up on the chart.
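The averaging described above amounts to a popularity-weighted mean of per-article scores. Ad Fontes' exact weighting scheme isn't published in this form, so this is only an illustrative sketch with invented scores and weights.

```python
def outlet_scores(articles):
    """Average per-article scores into an outlet-level position,
    giving more popular articles proportionally more influence.

    articles: list of dicts with 'bias', 'reliability', 'weight' keys
    """
    total = sum(a["weight"] for a in articles)
    bias = sum(a["bias"] * a["weight"] for a in articles) / total
    reliability = sum(a["reliability"] * a["weight"] for a in articles) / total
    return bias, reliability

# Invented example: the heavily weighted third article pulls the
# outlet's overall position toward its own scores.
articles = [
    {"bias": -10.0, "reliability": 40.0, "weight": 1.0},
    {"bias": -2.0,  "reliability": 50.0, "weight": 1.0},
    {"bias": -6.0,  "reliability": 45.0, "weight": 2.0},
]
print(outlet_scores(articles))  # (-6.0, 45.0)
```

The returned pair corresponds to a source's horizontal (bias) and vertical (reliability) coordinates on a chart like Ad Fontes'.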

Ad Fontes details its ratings process in a white paper from August 2019.

While the company mostly reviews prominent legacy news sources and other popular news sites, Otero hopes to add more podcasts and video content to the chart in coming iterations. The chart already rates the video news channel “The Young Turks” (which claims to be the most popular online news show, with 250 million views per month and 5 million subscribers on YouTube), and Otero mentioned she next wants to examine videos from Prager University (which claims 4 billion lifetime views for its content, 2.84 million subscribers on YouTube, and 1.4 million followers on Instagram). Ad Fontes is working with ad agency Oxford Road and dental care company Quip to create ratings for the top 50 news and politics podcasts on Apple Podcasts, Otero said.

“It’s not strictly traditional news sources, because so much of the information that people use to make decisions in their lives is not exactly news,” Otero said.

She was shocked when academic textbook publishers first wanted to use her chart. Now she wants it to become a household tool.

“As we add more news sources on to it, as we add more data, I envision this becoming a standard framework for evaluating news on at least these two dimensions of reliability and bias,” she said.

She sees complaints about it from both ends of the political spectrum as proof that it works.

“A lot of people love it and a lot of people hate it,” Otero said. “A lot of people on the left will call us neoliberal shills, and then a bunch of people that are on the right are like, ‘Oh, you guys are a bunch of leftists yourselves.’”

The project has grown to include tools for teaching media literacy to school kids and an interactive version of the chart that displays each rated article. Otero’s company operates as a public benefit corporation with a stated public benefit mission: “to make news consumers smarter and news media better.” She didn’t want Ad Fontes to rely on donations.

“If we want to grow with a problem, we have to be a sustainable business. Otherwise, we’re just going to make a small difference in a corner of the problem,” she said.

Ad Fontes makes money by responding to specific research requests from advertisers, academics and other parties that want certain outlets to be reviewed. The company also receives non-deductible donations and operates on WeFunder, a grassroots crowdfunding investment site, to bring in investors. So far, Ad Fontes has raised $163,940 from 276 investors through the site.

Should you use the charts?

Media bias charts with transparent, rigorous methodologies can offer insight into sources’ biases. That insight can help you understand what perspectives sources bring as they share the news, and what perspectives you might be missing as a news consumer.

But use them with caution. Political bias isn’t the only thing news consumers should look out for. Reliability is critical, too, and the accuracy and editorial standards of organizations play an important role in sharing informative, useful news.

Media bias charts are a media literacy tool. They offer well-researched appraisals of certain sources’ bias. But to best inform yourself, you need a full toolbox. Check out Poynter’s MediaWise project for more media literacy tools.

This article was originally published on Dec. 14, 2020. 

More about media bias charts

  • A media bias chart update puts The New York Times in a peculiar position
  • Letter to the editor: What Poynter’s critique misses about the Media Bias Chart


How to Spot 16 Types of Media Bias

Journalism is tied to a set of ethical standards and values, including truth and accuracy, fairness and impartiality, and accountability. However, journalism today often strays from objective fact, resulting in biased news and endless examples of media bias.

Media bias isn't necessarily a bad thing. But hidden bias misleads, manipulates and divides us. This is why AllSides provides hundreds of media bias ratings, a balanced newsfeed, the AllSides Media Bias Chart™, and the AllSides Fact Check Bias Chart™.

Seventy-two percent of Americans believe traditional news sources report fake news, falsehoods, or content that is purposely misleading. With trust in media declining, media consumers must learn how to spot different types of media bias.

This page outlines 16 types of media bias, along with examples of the different types of bias being used in popular media outlets. Download this page as a PDF.

Related: 14 Types of Ideological Bias

16 Types of Media Bias and how to spot them

  • Spin
  • Unsubstantiated Claims
  • Opinion Statements Presented as Fact
  • Sensationalism/Emotionalism
  • Mudslinging/Ad Hominem
  • Mind Reading
  • Slant
  • Flawed Logic
  • Bias by Omission
  • Omission of Source Attribution
  • Bias by Story Choice and Placement
  • Subjective Qualifying Adjectives
  • Word Choice
  • Photo Bias
  • Negativity Bias
  • Elite v. Populist Bias

1. Spin

Spin is a type of media bias that relies on vague, dramatic, or sensational language. When journalists put a “spin” on a story, they stray from objective, measurable facts. Spin clouds a reader’s view, preventing them from getting a precise take on what happened.

In the early 20th century, public relations and advertising executives were referred to as “spin doctors.” They would use vague language and make unsupportable claims to promote a product, service, or idea, downplaying alternative views in order to make a sale. Increasingly, these tactics appear in journalism.

Examples of Spin Words and Phrases:

  • High-stakes
  • Latest in a string of...
  • Turn up the heat
  • Stern talks
  • Facing calls to...
  • Even though
  • Significant

Sometimes the media uses spin words and phrases to imply bad behavior. These words are often used without providing hard facts, direct quotes, or witnessed behavior:

  • Acknowledged
  • Refusing to say
  • Came to light

To stir emotions, reporters often include colored, dramatic, or sensational words as a substitute for the word “said.” For example:

  • Frustration

Examples of Spin Media Bias:


“Gloat” means “contemplate or dwell on one's own success or another's misfortune with smugness or malignant pleasure.” Is there evidence in Trump’s tweet to show he is being smug or taking pleasure in the layoffs, or is this a subjective interpretation?

Source article

Business Insider Bias Rating


In this example of spin media bias, the Washington Post uses a variety of dramatic, sensationalist words to spin the story to make Trump appear emotional and unhinged. They also refer to the president's "vanity" without providing supporting evidence.

Washington Post Bias Rating


2. Unsubstantiated Claims

Journalists sometimes make claims in their reporting without including evidence to back them up. This can occur in the headline of an article, or in the body.

Statements that appear to be fact, but do not include specific evidence, are a key indication of this type of media bias.

Sometimes, websites or media outlets publish stories that are totally made up. This is often referred to as a type of fake news.

Examples of Unsubstantiated Claims Media Bias


In this media bias instance, The Daily Wire references a "longstanding pattern," but does not back this up with evidence.

The Daily Wire Bias Rating


In late January 2019, actor Jussie Smollett claimed he was attacked by two men who hurled racial and homophobic slurs. The Hill refers to “the violent attack” without using the word “alleged” or “allegations." The incident was revealed to be a hoax created by Smollett himself.

The Hill Bias Rating


This Washington Post columnist makes a claim about wealth distribution without noting where it came from. Who determined this number and how?

3. Opinion Statements Presented as Fact

Journalists sometimes use subjective language or statements under the guise of reporting objectively: even when a media outlet presents an article as a factual, objective news piece, it may employ subjective statements or language.

A subjective statement is one that is based on personal opinions, assumptions, beliefs, tastes, preferences, or interpretations. It reflects how the writer views reality, what they presuppose to be the truth. It is a statement colored by their specific perspective or lens and cannot be verified using concrete facts and figures within the article.

There are objective modifiers — “blue” “old” “single-handedly” “statistically” “domestic” — for which the meaning can be verified. On the other hand, there are subjective modifiers — “suspicious,” “dangerous,” “extreme,” “dismissively,” “apparently” — which are a matter of interpretation.

Interpretation can present the same event as two very different incidents. For instance, a political protest in which people sat down in the middle of a street, blocking traffic to draw attention to their cause, can be described as “peaceful” and “productive,” while others may describe it as “aggressive” and “disruptive.”

Examples of Words Signaling Subjective Statements:

  • Good/Better/Best
  • Is considered to be
  • May mean that
  • Bad/Worse/Worst
  • It's likely that

Source: Butte College Critical Thinking Tipsheet

An objective statement, on the other hand, is a statement about observable facts. It is not based on emotions or personal opinion; it rests on empirical evidence — what is quantifiable and measurable.

It’s important to note that an objective statement may not actually be true. The following are objective statements; each can be verified as true or false:

  • Burj Khalifa is the world's tallest building.
  • Five plus four equals ten.
  • There are nine planets in our solar system.

The first statement is true (as of this writing); the other two are false. It is possible to verify the height of buildings and determine that Burj Khalifa tops them all. It is possible to devise an experiment to demonstrate that five plus four does not equal ten, or to use established criteria to determine whether Pluto is a planet.

Editorial reviews by AllSides found that some media outlets blur the line between subjective statements and objective statements in two key ways, leading to potential confusion for readers:

  • Including subjective statements in their writing and not attributing them to a source (see Omission of Source Attribution).
  • Placing opinion or editorial content on the homepage next to hard news, or otherwise not clearly marking opinion content as “opinion.”

Explore logical fallacies that are often used by opinion writers.

Examples of Opinion Statements Presented as Fact


The sub-headline Vox uses is an opinion statement — some people likely believe the lifting of the gas limit will strengthen the coal industry — but Vox included this statement in a piece not labeled “Opinion.”

Vox Bias Rating


In this article about Twitter CEO Elon Musk banning reporters, the word "seemingly" signals that the journalist is offering a personal opinion that Musk's decisions are "arbitrary." Whether or not Musk's decisions are arbitrary is a matter of personal opinion and should be reserved for the opinion pages.

SFGate Rating


In this article about Hillary Clinton’s appearance on "The Late Show With Stephen Colbert," the author makes an assumption about Clinton’s motives and jumps to a subjective conclusion.

Fox News Bias Rating

4. Sensationalism/Emotionalism

Sensationalism is a type of media bias in which information is presented in a way that gives a shock or makes a deep impression. Often it gives readers a false sense of culmination, that all previous reporting has led to this ultimate story.

Sensationalist language is often dramatic, yet vague. It often involves hyperbole — at the expense of accuracy — or warping reality to mislead or provoke a strong reaction in the reader.

In recent years, some media outlets have been criticized for overusing the term “breaking” or “breaking news,” which historically was reserved for stories of deep impact or wide-scale importance.

With this type of media bias, reporters often increase the readability of their pieces using vivid verbs. But there are many verbs that are heavy with implications that can’t be objectively corroborated: “blast” “slam” “bury” “abuse” “destroy” “worry.”

Examples of Words and Phrases Used by the Media that Signal Sensationalism and Emotionalism:

  • Embroiled in...
  • Torrent of tweets

Examples of Sensationalism/Emotionalism Media Bias


“Gawk” means to stare or gape stupidly. Does AP’s language treat this event as serious and diplomatic, or as entertainment?

AP Bias Rating


Here, BBC uses sensationalism in the form of hyperbole, as the election is unlikely to involve bloodshed in the literal sense.

BBC Bias Rating


In this piece from the New York Post, the author uses multiple sensationalist phrases and emotional language to dramatize the “Twitter battle."

New York Post Bias Rating

5. Mudslinging/Ad Hominem

Mudslinging is a type of media bias when unfair or insulting things are said about someone in order to damage their reputation. Similarly, ad hominem (Latin for “to the person”) attacks are attacks on a person’s motive or character traits instead of the content of their argument or idea. Ad hominem attacks can be used overtly, or as a way to subtly discredit someone without having to engage with their argument.

Examples of Mudslinging


A Reason editor calls a New York Times columnist a "snowflake" after the columnist emailed a professor and his provost to complain about a tweet calling him a bedbug.

Reason Bias Rating


In March 2019, The Economist ran a piece describing political commentator and author Ben Shapiro as “alt-right.” Readers pointed out that Shapiro is Jewish (the alt-right is largely anti-Semitic) and has condemned the alt-right. The Economist issued a retraction and instead referred to Shapiro as a “radical conservative.”

Source: The Economist Twitter

6. Mind Reading

Mind reading is a type of media bias that occurs in journalism when a writer assumes they know what another person thinks, or thinks that the way they see the world reflects the way the world really is.

Examples of Mind Reading


We can’t objectively measure that Trump hates looking foolish, because we can’t read his mind or know what he is feeling. There is also no evidence provided to demonstrate that Democrats believe they have a winning hand.

CNN Bias Rating


How do we know that Obama doesn’t have passion or sense of purpose? Here, the National Review writer assumes they know what is going on in Obama’s head.

National Review Bias Rating


Vox is upfront about the fact that they are interpreting what Neeson said. Yet this interpretation ran in a piece labeled objective news, not a piece in the Opinion section. Even though Vox is overt about interpreting, by drifting away from what Neeson actually said, it is mind reading.

7. Slant

Slant is a type of media bias that describes when journalists tell only part of a story, or when they highlight, focus on, or play up one particular angle or piece of information. It can include cherry-picking information or data to support one side, or ignoring another perspective. Slant prevents readers from getting the full story, and narrows the scope of our understanding.

Examples of Slant


In the above example, Fox News notes that Rep. Alexandria Ocasio-Cortez’s policy proposals have received “intense criticism.” While this is true, it is only one side of the picture, as the Green New Deal was received well by other groups.


Here, Snopes does not indicate or investigate why police made sweeps (did they have evidence criminal activity was occurring in the complex?), nor did Snopes ask police for their justification, giving a one-sided view. In addition, the studies pointed to only show Black Americans are more likely to be arrested for drug possession, not all crimes.

Snopes Bias Rating

8. Flawed Logic

Flawed logic or faulty reasoning is a way to misrepresent people’s opinions or to arrive at conclusions that are not justified by the given evidence. Flawed logic can involve jumping to conclusions or arriving at a conclusion that doesn’t follow from the premise.

Examples of Flawed Logic


Here, the Daily Wire interprets a video to draw conclusions that aren’t clearly supported by the available evidence. The video shows Melania did not extend her hand to shake, but it could be because Clinton was too far away to reach, or perhaps there was no particular reason at all. By jumping to conclusions that this amounted to a “snub” or was the result of “bitterness” instead of limitations of physical reality or some other reason, The Daily Wire is engaging in flawed logic.

9. Bias by Omission

Bias by omission is a type of media bias in which media outlets choose not to cover certain stories, omit information that would support an alternative viewpoint, or omit voices and perspectives on the other side.

Media outlets sometimes omit stories in order to serve a political agenda. Sometimes, a story will only be covered by media outlets on a certain side of the political spectrum. Bias by omission also occurs when a reporter does not interview both sides of a story — for instance, interviewing only supporters of a bill, and not including perspectives against it.

Examples of Media Bias by Omission


In a piece titled, "Hate crimes are rising, regardless of Jussie Smollett's case. Here's why," CNN claims that hate crime incidents rose for three years, but omits information that may lead the reader to different conclusions. According to the FBI’s website , reports of hate crime incidents rose from previous years, but so did the number of agencies reporting, “with approximately 1,000 additional agencies contributing information.” This makes it unclear whether hate crimes are actually on the rise, as the headline claims, or simply appear to be because more agencies are reporting.

10. Omission of Source Attribution

Omission of source attribution is when a journalist does not back up their claims by linking to the source of that information. An informative, balanced article should provide the background or context of a story, including naming sources (publishing “on-the-record” information).

For example, journalists will often mention "baseless claims," "debunked theories," or note someone "incorrectly stated" something without including background information or linking to another article that would reveal how they concluded the statement is false or debunked. Or, reporters will write that “immigration opponents say," "critics say," or “supporters of the bill noted” without identifying who these sources are.

It is sometimes useful or necessary to use anonymous sources, because insider information is only available if the reporter agrees to keep their identity secret. But responsible journalists should be aware and make it clear that they are offering second-hand information on sensitive matters. This fact doesn’t necessarily make the statements false, but it does make them less than reliable.

Examples of Media Bias by Omission of Source Attribution


In this paragraph, The New York Times says Trump "falsely claimed" millions had voted illegally; they link to Trump's tweet, but not to a source of information that would allow the reader to determine Trump's claim is false.

The New York Times Bias Rating


In this paragraph, the Epoch Times repeatedly states "critics say" without attributing the views to anyone specific.

The Epoch Times Bias Rating


In a piece about the Mueller investigation, The New York Times never names the investigators, officials or associates mentioned.

11. Bias by Story Choice and Placement

Story choice, as well as story and viewpoint placement, can reveal media bias by showing which stories or viewpoints the editor finds most important.

Bias by story choice is when a media outlet's bias is revealed by which stories the outlet chooses to cover or to omit. For example, an outlet that chooses to cover the topic of climate change frequently can reveal a different political leaning than an outlet that chooses to cover stories about gun laws. The implication is that the outlet's editors and writers find certain topics more notable, meaningful, or important than others, which can tune us into the outlet's political bias or partisan agenda. Bias by story choice is closely linked to media bias by omission and slant .

Bias by story placement is one type of bias by placement. The stories that a media outlet features "above the fold" or prominently on its homepage and in print show which stories they really want you to read, even if you read nothing else on the site or in the publication. Many people will quickly scan a homepage or read only a headline, so the stories that are featured first can reveal what the editor hopes you take away or keep top of mind from that day.

Bias by viewpoint placement is a related type of bias by placement, often seen in political stories. A balanced piece of journalism will include perspectives from both the left and the right in equal measure. If a story features only viewpoints from left-leaning sources and commentators, or places them near the top of the story while omitting right-leaning viewpoints or burying them at the end, this is an example of bias by viewpoint placement.

Examples of Media Bias by Placement


In this screenshot of ThinkProgress' homepage taken at 1 p.m. ET on Sept. 6, 2019, the media outlet chooses to prominently display coverage of LGBT issues and cuts to welfare and schools programs. In the next screenshot of The Epoch Times homepage taken at the same time on the same day, the outlet privileges very different stories.


Taken at the same time on the same day as the screenshot above, The Epoch Times chooses to prominently feature stories about a hurricane, the arrest of illegal immigrants , Hong Kong activists, and the building of the border wall. Notice that ThinkProgress' headline on the border wall focuses on diverting funds from schools and day cares, while the Epoch Times headline focuses on the wall's completion.

12. Subjective Qualifying Adjectives

Journalists can reveal bias when they include subjective, qualifying adjectives in front of specific words or phrases. Qualifying adjectives are words that characterize or attribute specific properties to a noun. When a journalist uses qualifying adjectives, they are suggesting a way for you to think about or interpret the issue, instead of just giving you the facts and letting you make judgements for yourself. This can manipulate your view. Subjective qualifiers are closely related to spin words and phrases , because they obscure the objective truth and insert subjectivity.

For example, a journalist who writes that a politician made a "serious allegation" is interpreting the weight of that allegation for you. An unbiased piece of writing would simply tell you what the allegation is, and allow you to make your own judgement call as to whether it is serious or not.

In opinion pieces, subjective adjectives are okay; they become a problem when they are inserted outside of the opinion pages and into hard news pieces.

Sometimes, the use of an adjective may be warranted, but journalists have to be careful in exercising their judgement. For instance, it may be warranted to call a Supreme Court ruling that overturned a major law a "landmark case." Often, though, adjectives are included in ways that not everyone will agree with; people who favor limiting abortion, for instance, would likely not agree with a journalist who characterizes new laws restricting the act as a "disturbing trend." It is therefore important to notice, question, and challenge the adjectives journalists use, and to decide for yourself whether they are warranted.

Examples of Subjective Qualifying Adjectives

  • disturbing rise
  • serious accusations
  • troubling trend
  • sinister warning
  • awkward flaw
  • extreme law
  • baseless claim
  • debunked theory (this phrase could coincide with bias by omission, if the journalist doesn't include information for you to determine why the theory is false)
  • critical bill
  • offensive statement
  • harsh rebuke
  • extremist group
  • far-right/far-left organization


HuffPost's headline includes the phrases "sinister warning" and "extremist Republican." It goes on to note the politician's "wild rant" in a "frothy interview" and calls a competing network "far-right." These qualifying adjectives encourage the reader to think a certain way. A more neutral piece would have told the reader what Cawthorn said without telling the reader how to interpret it.

HuffPost bias rating

13. Word Choice

Words and phrases are loaded with political implications. The words or phrases a media outlet uses can reveal their perspective or ideology.

Liberals and conservatives often strongly disagree about the best way to describe hot-button issues. For example, a liberal journalist who favors abortion access may call it “reproductive healthcare,” or refer to supporters as “pro-choice.” Meanwhile, a conservative journalist would likely not use these terms — to them, this language softens an immoral or unjustifiable act. Instead, they may call people who favor abortion access “pro-abortion” rather than “pro-choice.”

Word choice can also reveal how journalists see the very same event very differently. For instance, one journalist may call an incident of civil unrest a “racial justice protest” to focus the readers' attention on the protesters' policy angles and advocacy; meanwhile, another journalist calls it a “riot” to focus readers' attention on looting and property destruction that occurred.

Words and their meanings are often shifting in the political landscape. The very same words and phrases can mean different things to different people. AllSides offers a Red Blue Translator to help readers understand how people on the left and right think and feel differently about the same words and phrases.

Examples of Polarizing Word Choices

  • pro-choice | anti-choice
  • pro-abortion | anti-abortion
  • gun rights | gun control
  • riot | protest
  • illegal immigrants | migrants
  • illegal alien | asylum-seeking migrants
  • woman | birthing person
  • voting rights | voting security
  • sex reassignment surgery | gender-affirming care
  • critical race theory | anti-racist education

Examples of Word Choice Bias


An outlet on the left calls Florida's controversial Parental Rights in Education law the "Don't Say Gay" bill, using language favored by opponents, while an outlet on the right calls the same bill the "FL education bill," signaling a supportive view.

USA Today source article

USA TODAY media bias rating

Fox News source article

Fox News media bias rating

14. Photo Bias

Photos can be used to shape the perception, emotions or takeaway a reader will have regarding a person or event. Sometimes a photo can give a hostile or favorable impression of the subject.

For example, a media outlet may use a photo of an event or rally that was taken at the very beginning of the event to give the impression that attendance was low. Or, they may only publish photos of conflict or a police presence at an event to make it seem violent and chaotic. Reporters may choose an image of a favored politician looking strong, determined or stately during a speech; if they disfavor him, they may choose a photo of him appearing to yell or look troubled during the same speech.

Examples of Photo Bias


Obama appears stern or angry — with his hand raised, brows furrowed, and mouth wide, it looks like maybe he’s yelling. The implication is that the news about the Obamacare ruling is something that would enrage Obama.

The Blaze bias rating


With a tense mouth, shifty eyes and head cocked to one side, Nunes looks guilty. The sensationalism in the headline aids in giving this impression (“neck-deep” in “scandal.”)

Mother Jones bias rating


With his lips pursed and eyes darting to the side, Schiff looks guilty in this photo. The headline stating that he “got caught celebrating” also implies that he was doing something he shouldn’t be doing. Whether or not he was actually celebrating impeachment at this dinner is up for debate, but if you judged Townhall’s article by the photo, you may conclude he was.

Townhall bias rating


With his arms outstretched and supporters cheering, Texas Gov. Greg Abbott appears triumphant in this photo. The article explains that a pediatric hospital in Texas announced it will stop performing “gender-confirming therapies” for children, following a directive from Abbott for the state to investigate whether such procedures on kids constituted child abuse. The implication of the headline and photo is that this is a victory.

The Daily Wire bias rating

15. Negativity Bias

Negativity bias refers to a type of bias in which reporters emphasize bad or negative news, or frame events in a negative light.

"If it bleeds, it leads" is a common media adage referring to negativity bias. Stories about death, violence, turmoil, struggle, and hardship tend to get spotlighted in the press, because these types of stories tend to get more attention and elicit more shock, outrage, fear, and cause us to become glued to the news, wanting to hear more.

Examples of Negativity Bias


This story frames labor force participation as a negative thing. However, if labor force participation remained low for a long time, that would also be written up as bad news.

Source: The New York Times

16. Elite v. Populist Bias

Elite bias is when journalists defer to the beliefs, viewpoints, and perspectives of people who are part of society's most prestigious, credentialed institutions — such as academic institutions, government agencies, business executives, or nonprofit organizations. Populist bias, on the other hand, is a bias in which the journalist defers to the perspectives, beliefs, or viewpoints of those who are outside of or dissent from prestigious institutions — such as "man on the street" stories, small business owners, less prestigious institutions, and people who live outside of major urban centers.

Elite/populist bias has a geographic component in the U.S. Because major institutions of power are concentrated in American coastal cities (which tend to vote blue), there can be conflicting values, perspectives, and ideologies between “coastal elites” and “rural/middle America” (which tends to vote red). The extent to which journalists emphasize the perspectives of urbanites versus people living in small-town or rural areas can show elite or populist bias, and thus political bias.

Examples of Elite v. Populist Bias


Elite Bias: This article emphasizes the guidance and perspectives of major government agencies and professors at elite universities.

Source: NBC News


Populist Bias: In this opinion piece, journalist Naomi Wolf pushes back against elite government agencies, saying they can't be trusted.

Source: The Epoch Times

Everyone is biased. It is part of human nature to have perspectives, preferences, and prejudices. But sometimes, bias — especially media bias — can become invisible to us. This is why AllSides provides hundreds of media bias ratings and a media bias chart.

We are all biased toward things that show us in the right. We are biased toward information that confirms our existing beliefs. We are biased toward the people or information that supports us, makes us look good, and affirms our judgements and virtues. And we are biased toward the more moral choice of action — at least, that which seems moral to us.

Journalism as a profession is biased toward vibrant communication, timeliness, and providing audiences with a sense of the current moment — whether or not that sense is politically slanted. Editors are biased toward strong narrative, stunning photographs, pithy quotes, and powerful prose. Every aspiring journalist has encountered media bias — sometimes the hard way. If they stay in the profession, often it will be because they have incorporated the biases of their editor.

But sometimes, bias can manipulate and blind us. It can put important information and perspectives in the shadows and prevent us from getting the whole view. For this reason, every type of media bias can, and occasionally should, be isolated and examined. This is just as true for journalists as it is for their audiences.

Good reporting can shed valuable light on our biases — good and bad. By learning how to spot media bias, how it works, and how it might blind us, we can avoid being fooled by media bias and fake news. We can learn to identify and appreciate different perspectives — and ultimately come to a fuller view.

Julie Mastrine | Director of Marketing and Media Bias Ratings, AllSides

Early Contributors and Editors (2018)

Jeff Nilsson | Saturday Evening Post

Sara Alhariri | Stossel TV

Kristine Sowers | Abridge News


Elizabeth Morrissette, Grace McKeon, Alison Louie, Amy Luther, and Alexis Fagen

Media bias can be defined as unjust favoritism in the reporting of certain ideas or standpoints. In the news, on social media, and in entertainment such as movies or television, we see media bias in the information these forms of media choose to pay attention to or report (“How to Detect Bias in News Media”, 2012). Consider the difference between Fox News and CNN: because these two broadcasters have very different audiences, they tend to be biased in what they report and how they report it, reflecting Republican or Democratic viewpoints.

Bias, in general, is a prejudice or preconceived notion against a person, group, or thing. Bias leads to stereotyping, which we can see in the way certain events are reported in the news. For example, during Hurricane Katrina, two sets of photos were taken of two people wading through water with bags of food. The two people, one white and one Black, were described differently: the Black man was reported as “looting” a grocery store, while the white person was reported as “finding food for survival.” The coverage showed media bias because it made the Black man seem like he was doing something wrong, while the white person was just finding things in order to survive (Guarino, 2015).

Commercial media is affected by bias because a corporation can influence what kind of entertainment is produced. When there is an investment involved or money at stake, companies tend to protect their investment by avoiding topics that could start a controversy (Pavlik, 2018). In order to understand what biased news is, we must be media literate. To be media literate, we need to accept that news outlets aren’t completely transparent about the stories they choose to report. Knowing that we can’t believe everything we read or see in the news will allow us as a society to become a more educated audience (Campbell, 2005).

Bias in the News

The news, whether we like it or not, is biased. Some outlets are biased toward Republicans, while others are biased toward Democrats. It’s important to understand this when watching or reading the news in order to be media literate. This can be tricky because journalists may believe their reporting is written with “fairness and balance,” but most times there is an underlying bias shaped by the news provider the story is written for (Pavlik and McIntosh, 61). With events happening so rapidly, journalists write quickly and sometimes point fingers without trying to. This is called agenda-setting, which Shirley Biagi defines as the idea that reporters don’t tell people what to think, but do tell them what and whom to talk about (Biagi, 268).

The pressure to put out articles quickly can affect the story as well. An event portrayed without all the facts and viewpoints can be laid out in a way that frames it differently than it actually happened (Biagi, 269). However, by watching or reading only one portrayal of an event, people will often blindly believe it is true, without seeing or reading other stories that may shine a different light on the subject (Vivian, 4). Media Impact defines this as the magic bullet theory: the assertion that media messages directly and measurably affect people’s behavior (Biagi, 269). The stress of tight deadlines also affects the number of variations of a story. The push to get stories out quickly leaves little room for deeper consideration; the result is consensus journalism, the tendency among journalists covering the same topic to report similar articles instead of differing interpretations of the event (Biagi, 268).

To see past media bias in the news, it’s important to be media literate: look past any possible framing or biased viewpoints and gather all the facts to form your own interpretation of a news story. It doesn’t hurt to read both sides of a story before accepting what someone is saying, taking into consideration whom they might be biased toward.

Stereotypes in the Media

Bias is found not only in the news but in other entertainment media such as TV and movies. Beginning in childhood, our perception of the world starts to form, and our own opinions and views are created as we learn to think for ourselves. This process of learning to think for ourselves is called socialization, and one key agent of socialization is the mass media. The ideas and images mass media portrays are very influential at a young age, but that influence is not always positive. Entertainment media in particular plays a big role in spreading stereotypes, so much so that they become normal to us (Pavlik and McIntosh, 55).

The stereotypes in entertainment media may be gender stereotypes or cultural stereotypes. Gender stereotypes reinforce how people think each gender is supposed to behave. For example, a female stereotype could be a teenage girl who likes to go shopping, or a stay-at-home mom who cleans the house and buys the groceries. Men and women are shown differently in commercials, TV, and movies: women as domestic housewives, men as holding high-status jobs and participating in more outdoor activities (Davis, 411). A very common gender stereotype is that women like to shop and are not smart enough to hold a high-status profession such as lawyer or doctor. This stereotype appears in the musical/movie Legally Blonde, whose main character is a woman doubted by her male counterparts; she must prove herself intelligent enough to become a lawyer. Another gender stereotype is that men like to use tools and drive cars: in most tool and car commercials and advertisements, a man is shown using the product. Women, on the other hand, are almost always seen in commercials for cleaning supplies or products like soaps, which feeds the common stereotype that women are stay-at-home moms who take on duties such as cleaning the house, doing the dishes, and doing the laundry.

Racial stereotyping is also quite common in entertainment media. The mass media helps to reproduce racial stereotypes and spread those ideologies (Abraham, 184). In movies and TV, minority characters are often shown as their respective stereotypes. In one specific example, the media “manifests bias and prejudice in representations of African Americans” (Abraham, 184), who are often portrayed in negative ways; in the news, they are frequently linked to negative issues such as crime, drug use, and poverty (Abraham, 184). Another example of racial stereotyping is Kevin Gnapoor in the popular movie Mean Girls: his character is Indian, and happens to be a math enthusiast and member of the Mathletes. These examples illustrate how entertainment media relies on stereotypes.

Types of Media Bias

Throughout media, we see many different types of bias being used. These are bias by omission, bias by selection of sources, bias by story selection, bias by placement, and bias by labeling. Each is used in a different way to keep the consumer from getting all of the information.

  • Bias by omission: Bias by omission is when the reporter leaves out one side of the argument, restricting the information the consumer receives. It is most prevalent in political stories (Dugger) and happens when claims from either the liberal or conservative side are left out. It can be seen in a single story or in a series of stories over time (Media Bias). One way to avoid this type of bias is to read or view multiple sources to ensure that you are getting all of the information.
  • Bias by selection of sources: Bias by selection of sources occurs when the author includes multiple sources that all support one side (Baker), or intentionally leaves out sources that are pertinent to the other side of the story (Dugger). This type of bias also uses language such as “experts believe” and “observers say” to make readers believe that what they are reading is credible, and it presents expert opinions from only one side, creating a barrier between the other side of the story and consumers (Baker).
  • Bias by story selection: The second type of selection bias is bias by story selection. It shows up across an entire organization rather than in a few stories: news broadcasters choose to include only stories that support the corporation’s overall outlook, ignoring stories that would sway people to the other side (Baker). Normally the selected stories fully support either the left-wing or right-wing way of thinking.
  • Bias by placement: Bias by placement is a growing problem, made easier by the many ways media is presented now, whether through social media or simply online. This type of bias reveals how important a particular story is to the editors: stories they consider unimportant, or don’t want to be easily accessible, are placed poorly to downplay their significance and make consumers think they don’t matter (Baker).
  • Bias by labeling: Bias by labeling is a more complicated type of bias mostly used to describe politicians unfairly. Many reporters will tag politicians on one side of an argument with extreme labels while saying nothing about the other side (Media Bias). The labels given can be positive or negative, depending on the side the outlet favors. Some reporters will falsely label people as “experts,” giving them authority they have not earned and do not deserve (Media Bias). This bias can also appear when a reporter fails to properly label a politician, such as not labeling a conservative as a conservative (Dugger). It can be difficult to pick out because not all labeling is biased, but when strong labels are used, it is important to check different sources to see if the information is correct.

Bias in Entertainment

Bias is an opinion in favor of or against a person, group, or thing compared with another, presented in ways that favor results in line with the holder’s prejudgments and political or practical commitments (Hammersley & Gomm, 1). Media bias in entertainment is bias from journalists and news outlets within the mass media in the stories and events they report and in how they cover them.

There are biases in most entertainment today, including the news, movies, and television. The three most common biases in entertainment are political, racial, and gender biases. Political bias is when a political comment is worked into a movie or TV show in hopes of shifting the viewer’s political views (Murillo, 462). Racial bias is, for example, when African Americans are portrayed in a negative way and shown in situations involving crime, drug use, or poverty (Mitchell, 621). Gender biases typically concern females and the roles people are expected to play (Martin, 665). For example, young girls are supposed to like the color pink, princesses, and dolls; women are usually the ones seen in cleaning commercials and are portrayed as “dainty” and “fragile”; and men are usually seen in more “masculine” types of media, such as those involving cars and tools.

Bias is always present, and it can be found in all outlets of media. So many different types of bias surround us, whether in the news, in the entertainment industry, or in the portrayal of stereotypes. To be media literate, it’s important to always be aware of this and to read more than one article, allowing yourself to come to your own conclusions and think for yourself.

Works Cited 

Abraham, Linus, and Osei Appiah. “Framing News Stories: The Role of Visual Imagery in Priming Racial Stereotypes.”  Howard Journal of Communications , vol. 17, no. 3, 2006, pp. 183–203.

Baker, Brent H. “Media Bias.”  Student News Daily , 2017.

Biagi, Shirley. “Changing Messages.”  Media/Impact; An Introduction to Mass Media , 10th ed., Cengage Learning, 2013, pp. 268-270.

Campbell, Richard, et al.  Media & Culture: an Introduction to Mass Communication . Bedford/St Martins, 2005.

Davis, Shannon N. “Sex Stereotypes In Commercials Targeted Toward Children: A Content Analysis.”  Sociological Spectrum , vol. 23, no. 4, 2003, pp. 407–424.

Dugger, Ashley. “Media Bias and Criticism .” http://study.com/academy/lesson/media-bias-criticism-definition-types-examples.html .

Guarino, Mark. “Misleading reports of lawlessness after Katrina worsened crisis, officials say.”   The Guardian , 16 Aug. 2015, http://www.theguardian.com/us-news/2015/aug/16/hurricane-katrina-new-orleans-looting-violence-misleading-reports .

Hammersley, Martyn, and Roger Gomm. Bias in Social Research . Vol. 2, ser. 1, Sociological Research Online, 1997.

“How to Detect Bias in News Media.”  FAIR , 19 Nov. 2012, http://fair.org/take-action-now/media-activism-kit/how-to-detect-bias-in-news-media/ .

Levasseur, David G. “Media Bias.”  Encyclopedia of Political Communication , Lynda Lee Kaid, editor, Sage Publications, 1st edition, 2008. Credo Reference, https://search.credoreference.com/content/entry/sagepolcom/media_bias/0 .

Martin, Patricia Yancey, John R. Reynolds, and Shelley Keith. “Gender Bias and Feminist Consciousness among Judges and Attorneys: A Standpoint Theory Analysis.” Signs: Journal of Women in Culture and Society, vol. 27, no. 3, Spring 2002, pp. 665-701.

Mitchell, T. L., Haw, R. M., Pfeifer, J. E., & Meissner, C. A. (2005). “Racial Bias in Mock Juror Decision-Making: A Meta-Analytic Review of Defendant Treatment.” Law and Human Behavior , 29(6), 621-637.

Murillo, M. (2002). “Political Bias in Policy Convergence: Privatization Choices in Latin America.” World Politics , 54(4), 462-493.

Pavlik, John V., and Shawn McIntosh. “Media Literacy in the Digital Age .”  Converging Media: a New Introduction to Mass Communication , Oxford University Press, 2017.

Vivian, John. “Media Literacy .”  The Media of Mass Communication , 8th ed., Pearson, 2017, pp. 4–5.

Introduction to Media Studies Copyright © by Elizabeth Morrissette, Grace McKeon, Alison Louie, Amy Luther, and Alexis Fagen is licensed under a Creative Commons Attribution 4.0 International License , except where otherwise noted.


June 21, 2018

Biases Make People Vulnerable to Misinformation Spread by Social Media

Researchers have developed tools to study the cognitive, societal and algorithmic biases that help fake news spread

By Giovanni Luca Ciampaglia , Filippo Menczer & The Conversation US


The following essay is reprinted with permission from The Conversation , an online publication covering the latest research.

Social media are among the primary sources of news in the U.S. and across the world. Yet users are exposed to content of questionable accuracy, including conspiracy theories, clickbait, hyperpartisan content, pseudoscience and even fabricated “fake news” reports.

It’s not surprising that there’s so much disinformation published: Spam and online fraud are lucrative for criminals, and government and political propaganda yield both partisan and financial benefits. But the fact that low-credibility content spreads so quickly and easily suggests that people and the algorithms behind social media platforms are vulnerable to manipulation.


[Video: Explaining the tools developed at the Observatory on Social Media.]

Our research has identified three types of bias that make the social media ecosystem vulnerable to both intentional and accidental misinformation. That is why our  Observatory on Social Media  at Indiana University is building  tools  to help people become aware of these biases and protect themselves from outside influences designed to exploit them.

Bias in the brain

Cognitive biases originate in the way the brain processes the information that every person encounters every day. The brain can deal with only a finite amount of information, and too many incoming stimuli can cause  information overload . That in itself has serious implications for the quality of information on social media. We have found that steep competition for users’ limited attention means that  some ideas go viral despite their low quality —even when people prefer to share high-quality content.*

To avoid getting overwhelmed, the brain uses a  number of tricks . These methods are usually effective, but may also  become biases  when applied in the wrong contexts.

One cognitive shortcut happens when a person is deciding whether to share a story that appears on their social media feed. People are  very affected by the emotional connotations of a headline , even though that’s not a good indicator of an article’s accuracy. Much more important is  who wrote the piece .

To counter this bias, and help people pay more attention to the source of a claim before sharing it, we developed  Fakey , a mobile news literacy game (free on  Android  and  iOS ) simulating a typical social media news feed, with a mix of news articles from mainstream and low-credibility sources. Players get more points for sharing news from reliable sources and flagging suspicious content for fact-checking. In the process, they learn to recognize signals of source credibility, such as hyperpartisan claims and emotionally charged headlines.

Bias in society

Another source of bias comes from society. When people connect directly with their peers, the social biases that guide their selection of friends come to influence the information they see.

In fact, in our research we have found that it is possible to  determine the political leanings of a Twitter user  by simply looking at the partisan preferences of their friends. Our analysis of the structure of these  partisan communication networks  found social networks are particularly efficient at disseminating information – accurate or not – when  they are closely tied together and disconnected from other parts of society .
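
The friend-based inference described above can be illustrated with a toy sketch. This is not the researchers' actual method (their work relied on analysis of partisan communication networks); it is a minimal, hypothetical version of the idea: estimate a user's leaning as the most common leaning among the accounts they follow.

```python
from collections import Counter

def infer_leaning(friends_leanings):
    """Toy inference: guess a user's political leaning as the most
    common label among the accounts they follow (their 'friends').
    Returns None when no labeled friends are available."""
    if not friends_leanings:
        return None
    return Counter(friends_leanings).most_common(1)[0][0]

# A user who mostly follows left-leaning accounts is inferred as left-leaning.
print(infer_leaning(["left", "left", "right", "left"]))  # left
```

The sketch shows why homophily makes leaning easy to predict: when social circles are politically homogeneous, the majority label among friends is almost always the user's own.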

The tendency to evaluate information more favorably if it comes from within their own social circles creates “ echo chambers ” that are ripe for manipulation, either consciously or unintentionally. This helps explain why so many online conversations devolve into  “us versus them” confrontations .

To study how the structure of online social networks makes users vulnerable to disinformation, we built  Hoaxy , a system that tracks and visualizes the spread of content from low-credibility sources, and how it competes with fact-checking content. Our analysis of the data collected by Hoaxy during the 2016 U.S. presidential elections shows that Twitter accounts that shared misinformation were  almost completely cut off from the corrections made by the fact-checkers.

When we drilled down on the misinformation-spreading accounts, we found a very dense core group of accounts retweeting each other almost exclusively – including several bots. The only times that fact-checking organizations were ever quoted or mentioned by the users in the misinformed group were when questioning their legitimacy or claiming the opposite of what they wrote.

Bias in the machine

The third group of biases arises directly from the algorithms used to determine what people see online. Both social media platforms and search engines employ them. These personalization technologies are designed to select only the most engaging and relevant content for each individual user. But in doing so, they may end up reinforcing the cognitive and social biases of users, thus making them even more vulnerable to manipulation.

For instance, the detailed  advertising tools built into many social media platforms  let disinformation campaigners exploit  confirmation bias  by  tailoring messages  to people who are already inclined to believe them.

Also, if a user often clicks on Facebook links from a particular news source, Facebook will  tend to show that person more of that site’s content . This so-called “ filter bubble ” effect may isolate people from diverse perspectives, strengthening confirmation bias.

Our own research shows that social media platforms expose users to a less diverse set of sources than do non-social media sites like Wikipedia. Because this is at the level of a whole platform, not of a single user, we call this the  homogeneity bias .

Another important ingredient of social media is information that is trending on the platform, according to what is getting the most clicks. We call this  popularity bias , because we have found that an algorithm designed to promote popular content may negatively affect the overall quality of information on the platform. This also feeds into existing cognitive bias, reinforcing what appears to be popular irrespective of its quality.
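
Popularity bias can be made concrete with a small hypothetical sketch (the items and numbers below are invented for illustration, not drawn from the researchers' data): a feed that ranks purely by share count can surface low-quality items that luck and outrage pushed to the top, while a quality-aware ranking would surface different items entirely.

```python
# Each item has an intrinsic quality (0-1) and a share count driven
# partly by early exposure and emotional pull rather than quality.
items = [
    {"title": "careful report",  "quality": 0.9, "shares": 120},
    {"title": "clickbait rumor", "quality": 0.2, "shares": 950},
    {"title": "solid explainer", "quality": 0.8, "shares": 300},
    {"title": "outrage bait",    "quality": 0.3, "shares": 700},
]

# Rank the same pool two ways: by popularity and by quality.
by_popularity = sorted(items, key=lambda it: it["shares"], reverse=True)
by_quality = sorted(items, key=lambda it: it["quality"], reverse=True)

top2_pop = [it["title"] for it in by_popularity[:2]]
top2_qual = [it["title"] for it in by_quality[:2]]

print(top2_pop)   # ['clickbait rumor', 'outrage bait']
print(top2_qual)  # ['careful report', 'solid explainer']
```

The two rankings disagree completely at the top, which is the bias in miniature: promoting what is already popular feeds users more of it regardless of its quality.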

All these algorithmic biases can be manipulated by  social bots , computer programs that interact with humans through social media accounts. Most social bots, like Twitter’s  Big Ben , are harmless. However, some conceal their real nature and are used for malicious intents, such as  boosting disinformation  or falsely  creating the appearance of a grassroots movement , also called “astroturfing.” We found  evidence of this type of manipulation  in the run-up to the 2010 U.S. midterm election.

To study these manipulation strategies, we developed a tool to detect social bots called  Botometer . Botometer uses machine learning to detect bot accounts, by inspecting thousands of different features of Twitter accounts, like the times of its posts, how often it tweets, and the accounts it follows and retweets. It is not perfect, but it has revealed that as many as  15 percent of Twitter accounts show signs of being bots .
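
Botometer's real model applies machine learning to thousands of account features; as a loose sketch of the underlying idea only, a handful of hand-picked features can be combined into a bot-likeness score. The features, thresholds, and weights below are invented for illustration and are not Botometer's.

```python
def bot_score(account):
    """Toy bot-likeness score in [0, 1] built from a few account
    features. All thresholds and weights are illustrative only."""
    points = 0
    if account["tweets_per_day"] > 100:  # inhumanly high posting volume
        points += 4
    if account["followers"] < account["following"] / 10:  # follows far more than followed
        points += 3
    if account["default_profile_image"]:  # never customized the profile
        points += 3
    return points / 10

suspicious = {"tweets_per_day": 250, "followers": 12,
              "following": 4000, "default_profile_image": True}
print(bot_score(suspicious))  # 1.0
```

A real detector learns its features and weights from labeled examples instead of hand-coding them, which is why Botometer inspects thousands of signals rather than three.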

Using Botometer in conjunction with Hoaxy, we analyzed the core of the misinformation network during the 2016 U.S. presidential campaign. We found many bots exploiting both the cognitive, confirmation and popularity biases of their victims and Twitter’s algorithmic biases.

These bots are able to construct filter bubbles around vulnerable users, feeding them false claims and misinformation. First, they can attract the attention of human users who support a particular candidate by tweeting that candidate’s hashtags or by mentioning and retweeting the person. Then the bots can amplify false claims smearing opponents by retweeting articles from low-credibility sources that match certain keywords. This activity also makes the algorithm highlight for other users false stories that are being shared widely.

Understanding complex vulnerabilities

Even as our research, and others’, shows how individuals, institutions and even entire societies can be manipulated on social media, there are  many questions  left to answer. It’s especially important to discover how these different biases interact with each other, potentially creating more complex vulnerabilities.

Tools like ours offer internet users more information about disinformation, and therefore some degree of protection from its harms. The solutions will  not likely be only technological , though there will probably be some technical aspects to them. But they must take into account  the cognitive and social aspects  of the problem.

*Editor’s note: This article was updated on Jan. 10, 2019, to remove a link to a study that has been retracted. The text of the article is still accurate, and remains unchanged.

This article was originally published on The Conversation . Read the original article .


Does the Media Show Bias Essay


While many information sources are now available on the Internet, it is easy to find biases and, therefore, to see from the beginning what conclusions a news article implies. Before, when media was mostly on paper, less information was available, but biases were not so widespread either: there were several trusted outlets that did their best to maintain their reputations. Conversely, today there is a large number of left-, center-, and right-wing media websites, each of which presents information from its own point of view. Modern media represents a powerful tool for manipulating public opinion, and it is crucial to elucidate biases to prevent these manipulations. This essay will show that in most cases these biases are not explicit, and thus a reader needs to learn to recognize them.

There are several types of media bias, and while good news reporters should know and avoid them, they may instead use them to present information from a certain point of view. The six most common types are omission, source selection, story selection, placement, labeling, and spin (Student News Daily). Bias by omission occurs when an article leaves out relevant information, pretending it does not exist. Bias by source or story selection occurs when reporters draw only on certain kinds of sources, such as right- or left-wing ones, ignoring other points of view. Placement and labeling biases occur when reporters position an article where it is most visible, or label certain people as "experts" to make one point of view sound more authoritative. Last, spin bias rests on interpretation: an article interprets an event or policy in one way while ignoring other possible interpretations.

An excellent example of media bias can be seen in three articles describing one single event: an accident on the Norfolk Southern railroad near East Palestine, Ohio. Figure 1 shows three headlines from different sources, all dedicated to the same event, yet one can see how differently they present it, even in their titles (AllSides). All three articles are described below to demonstrate how various media can be biased.

Figure 1. Three articles about the train derailment in Ohio, which caused severe ecological damage. Each conveys a different message while discussing the same facts (AllSides).

In the ABC News article, the event is presented such that Norfolk Southern's CEO faces a grilling from the Senate and is harshly accused over the calamity. Despite mentioning that the CEO apologized and made efforts to improve the situation, the article focuses on the fact that Norfolk Southern has experienced a large number of accidents and should be thoroughly examined to prevent them in the future (Pecorin and Pezenik). As ABC News is considered a left-wing outlet, its article is overtly critical of Norfolk Southern and clearly condemns its actions.

The Trains.com article, while bearing a similar title, consists primarily of facts and directly presents the dialogue between Norfolk Southern's CEO and the Senate members. It focuses on flaws present in the company, such as a lack of protection from toxic compounds, that made the accident possible (Stephens). It also describes the Senate's requirements for the company to improve safety, such as installing 200 additional hotbox detectors and guaranteeing paid sick days for all workers.

Finally, Fox Business, which presents business news and has a clear right-wing leaning, says nothing about a grilling but emphasizes that the CEO apologized to the Senate for the calamity. The article opens by describing the CEO's speech, his apologies, and his promises to help everyone who suffered from the railroad incident (Wallace). It mentions that more than $20 million has already been spent to help families in East Palestine. At the end of the article, the Senate investigation is mentioned, but neither the CEO's dialogue with the senators nor their requirements are described.

Word choice, the presentation of facts, and the general reputation of a news source all contribute to the biases that can easily be found in modern media. Bias by omission appears in all three of the articles discussed (Student News Daily). Fox Business, a right-wing outlet favoring large businesses, concentrated on the CEO's speech and the more than $20 million spent to help suffering families, only briefly noting that the Senate had initiated investigations (Wallace). Conversely, ABC News, a left-wing outlet, is the most critical of Norfolk Southern and its CEO, emphasizing the company's flaws and omitting the $20 million spent to support the damaged community (Pecorin and Pezenik). The Trains.com article is the most objective, presenting the existing data and the dialogue between the CEO and the Senate members and letting readers form their own opinions (Stephens). Thus, while an outlet can reduce bias through the conscientious work of its employees, the risk of bias will always remain, depending on the point of view from which the information is presented.

Media employees can work to reduce bias, describe various points of view explicitly, and present readers with objective facts from which they can draw their own conclusions. However, it is hard to notice every possible bias, and it is easy to concentrate on one point of view and to select sources and stories while ignoring others. In addition, a single event admits many interpretations, and each outlet can select its own, which is essential to remember. The example of three articles describing the Norfolk Southern railroad incident shows how different the attitudes and the selections of facts can be, ranging from condemning the company for the incident to showing that it is doing its best to help victims and prevent future calamities. Knowing which biases may be present, and how differently the same information can be shown, helps readers expose bias and take in all the related data and as many points of view as possible.

Works Cited

AllSides. "Norfolk Southern CEO Apologizes for Ohio Train Crash in Senate Hearing." AllSides, Web.

Pecorin, Allison, and Sasha Pezenik. "Norfolk Southern CEO Faces Senate Grilling over Toxic Train Derailment in East Palestine, Ohio." ABC News, 2023, Web.

Stephens, Bill. "Senate Committee Grills Norfolk Southern CEO about East Palestine Derailment." Trains, 2023, Web.

Student News Daily. "Media Bias." Student News Daily, 2021, Web.

Wallace, Danielle. "Norfolk Southern CEO Apologizes for East Palestine, Ohio, Train Derailment in Senate Testimony." FOXBusiness, Web.


NPR in Turmoil After It Is Accused of Liberal Bias

An essay from an editor at the broadcaster has generated a firestorm of criticism about the network on social media, especially among conservatives.


By Benjamin Mullin and Katie Robertson

NPR is facing both internal tumult and a fusillade of attacks by prominent conservatives this week after a senior editor publicly claimed the broadcaster had allowed liberal bias to affect its coverage, risking its trust with audiences.

Uri Berliner, a senior business editor who has worked at NPR for 25 years, wrote in an essay published Tuesday by The Free Press, a popular Substack publication, that “people at every level of NPR have comfortably coalesced around the progressive worldview.”

Mr. Berliner, a Peabody Award-winning journalist, castigated NPR for what he said was a litany of journalistic missteps around coverage of several major news events, including the origins of Covid-19 and the war in Gaza. He also said the internal culture at NPR had placed race and identity as “paramount in nearly every aspect of the workplace.”

Mr. Berliner’s essay has ignited a firestorm of criticism of NPR on social media, especially among conservatives who have long accused the network of political bias in its reporting. Former President Donald J. Trump took to his social media platform, Truth Social, to argue that NPR’s government funding should be rescinded, an argument he has made in the past.

NPR has forcefully pushed back on Mr. Berliner’s accusations and the criticism.

“We’re proud to stand behind the exceptional work that our desks and shows do to cover a wide range of challenging stories,” Edith Chapin, the organization’s editor in chief, said in an email to staff on Tuesday. “We believe that inclusion — among our staff, with our sourcing, and in our overall coverage — is critical to telling the nuanced stories of this country and our world.” Some other NPR journalists also criticized the essay publicly, including Eric Deggans, its TV critic, who faulted Mr. Berliner for not giving NPR an opportunity to comment on the piece.

In an interview on Thursday, Mr. Berliner expressed no regrets about publishing the essay, saying he loved NPR and hoped to make it better by airing criticisms that have gone unheeded by leaders for years. He called NPR a “national trust” that people rely on for fair reporting and superb storytelling.

“I decided to go out and publish it in hopes that something would change, and that we get a broader conversation going about how the news is covered,” Mr. Berliner said.

He said he had not been disciplined by managers, though he said he had received a note from his supervisor reminding him that NPR requires employees to clear speaking appearances and media requests with standards and media relations. He said he didn’t run his remarks to The New York Times by network spokespeople.

When the hosts of NPR’s biggest shows, including “Morning Edition” and “All Things Considered,” convened on Wednesday afternoon for a long-scheduled meet-and-greet with the network’s new chief executive, Katherine Maher , conversation soon turned to Mr. Berliner’s essay, according to two people with knowledge of the meeting. During the lunch, Ms. Chapin told the hosts that she didn’t want Mr. Berliner to become a “martyr,” the people said.

Mr. Berliner’s essay also sent critical Slack messages whizzing through some of the same employee affinity groups focused on racial and sexual identity that he cited in his essay. In one group, several staff members disputed Mr. Berliner’s points about a lack of ideological diversity and said efforts to recruit more people of color would make NPR’s journalism better.

On Wednesday, staff members from “Morning Edition” convened to discuss the fallout from Mr. Berliner’s essay. During the meeting, an NPR producer took issue with Mr. Berliner’s argument for why NPR’s listenership has fallen off, describing a variety of factors that have contributed to the change.

Mr. Berliner’s remarks prompted vehement pushback from several news executives. Tony Cavin, NPR’s managing editor of standards and practices, said in an interview that he rejected all of Mr. Berliner’s claims of unfairness, adding that his remarks would probably make it harder for NPR journalists to do their jobs.

“The next time one of our people calls up a Republican congressman or something and tries to get an answer from them, they may well say, ‘Oh, I read these stories, you guys aren’t fair, so I’m not going to talk to you,’” Mr. Cavin said.

Some journalists have defended Mr. Berliner’s essay. Jeffrey A. Dvorkin, NPR’s former ombudsman, said on social media that Mr. Berliner was “not wrong.” Chuck Holmes, a former managing editor at NPR, called Mr. Berliner’s essay “brave” on Facebook.

Mr. Berliner’s criticism was the latest salvo within NPR, which is no stranger to internal division. In October, Mr. Berliner took part in a lengthy debate over whether NPR should defer to language proposed by the Arab and Middle Eastern Journalists Association while covering the conflict in Gaza.

“We don’t need to rely on an advocacy group’s guidance,” Mr. Berliner wrote, according to a copy of the email exchange viewed by The Times. “Our job is to seek out the facts and report them.” The debate didn’t change NPR’s language guidance, which is made by editors who weren’t part of the discussion. And in a statement on Thursday, the Arab and Middle Eastern Journalists Association said it is a professional association for journalists, not a political advocacy group.

Mr. Berliner’s public criticism has highlighted broader concerns within NPR about the public broadcaster’s mission amid continued financial struggles. Last year, NPR cut 10 percent of its staff and canceled four podcasts, including the popular “Invisibilia,” as it tried to make up for a $30 million budget shortfall. Listeners have drifted away from traditional radio to podcasts, and the advertising market has been unsteady.

In his essay, Mr. Berliner laid some of the blame at the feet of NPR’s former chief executive, John Lansing, who said he was retiring at the end of last year after four years in the role. He was replaced by Ms. Maher, who started on March 25.

During a meeting with employees in her first week, Ms. Maher was asked what she thought about decisions to give a platform to political figures like Ronna McDaniel, the former Republican Party chair whose position as a political analyst at NBC News became untenable after an on-air revolt from hosts who criticized her efforts to undermine the 2020 election.

“I think that this conversation has been one that does not have an easy answer,” Ms. Maher responded.

Benjamin Mullin reports on the major companies behind news and entertainment. Katie Robertson covers the media industry for The Times.


Fake news, disinformation and misinformation in social media: a review

Esma Aïmeur

Department of Computer Science and Operations Research (DIRO), University of Montreal, Montreal, Canada

Sabrine Amri

Gilles Brassard

Associated data

All the data and material are available in the papers cited in the references.

Online social networks (OSNs) are rapidly growing and have become a huge source of all kinds of global and local news for millions of users. However, OSNs are a double-edged sword. Despite the great advantages they offer, such as easy, unlimited communication and instant access to news and information, they also have many disadvantages and issues. One of the most challenging is the spread of fake news. Fake news identification is still a complex, unresolved issue. Furthermore, fake news detection on OSNs presents unique characteristics and challenges that make finding a solution anything but trivial. Meanwhile, artificial intelligence (AI) approaches are still incapable of overcoming this challenging problem. To make matters worse, AI techniques such as machine learning and deep learning are leveraged to deceive people by creating and disseminating fake content. Consequently, automatic fake news detection remains a huge challenge, primarily because the content is designed to closely resemble the truth, and it is often hard to determine its veracity by AI alone without additional information from third parties. This work aims to provide a comprehensive and systematic review of fake news research as well as a fundamental review of existing approaches used to detect and prevent fake news from spreading via OSNs. We present the research problem and the existing challenges, discuss the state of the art in existing approaches for fake news detection, and point out future research directions for tackling these challenges.

Introduction

Context and motivation.

Fake news, disinformation and misinformation have become such a scourge that Marcia McNutt, president of the National Academy of Sciences of the United States, is quoted as saying (in an implicit reference to the COVID-19 pandemic) that “Misinformation is worse than an epidemic: It spreads at the speed of light throughout the globe and can prove deadly when it reinforces misplaced personal bias against all trustworthy evidence” in a joint statement of the National Academies 1 posted on July 15, 2021. Indeed, although online social networks (OSNs), also called social media, have improved the ease with which real-time information is broadcast, their popularity and massive use have expanded the spread of fake news by increasing the speed and scope at which it can spread. Fake news may refer to the manipulation of information, carried out through the production of false information or the distortion of true information. However, this problem was not created by social media. Long ago, there were already rumors in the traditional media that Elvis was not dead, 2 that the Earth was flat, 3 that aliens had invaded us, 4 etc.

Social media has therefore become a powerful channel for fake news dissemination (Sharma et al. 2019; Shu et al. 2017). According to the Pew Research Center’s analysis of news use across social media platforms, in 2020 about half of American adults got news on social media at least sometimes, 5 while in 2018 only one-fifth said they often got news via social media. 6

Hence, fake news can have a significant impact on society, as manipulated and false content is easier to generate and harder to detect (Kumar and Shah 2018) and as disinformation actors change their tactics (Kumar and Shah 2018; Micallef et al. 2020). In 2017, Snow predicted in the MIT Technology Review (Snow 2017) that most individuals in mature economies would consume more false than valid information by 2022.

Much recent news on the COVID-19 pandemic, which has flooded the web and created panic in many countries, has been reported as fake. 7 For example, holding your breath for ten seconds to one minute is not a self-test for COVID-19 8 (see Fig.  1 ). Similarly, online posts claiming to reveal various “cures” for COVID-19, such as eating boiled garlic or drinking chlorine dioxide (an industrial bleach), were verified 9 as fake, in some cases as dangerous, and would never cure the infection.

Fig. 1: Fake news example about a self-test for COVID-19. Source: https://cdn.factcheck.org/UploadedFiles/Screenshot031120_false.jpg , last access date: 26-12-2022

Social media has outperformed television as the major news source for young people in the UK and the USA. 10 Moreover, as it is easier to generate and disseminate news online than through traditional media or face to face, large volumes of fake news are produced online for many reasons (Shu et al. 2017). Furthermore, a previous study of the spread of online news on Twitter (Vosoughi et al. 2018) reported that false news spreads six times faster online than truthful content and that 70% of users could not distinguish real from fake news (Vosoughi et al. 2018), owing to the attractive novelty of the latter (Bovet and Makse 2019). It was determined that falsehood spreads significantly farther, faster, deeper and more broadly than the truth in all categories of information, and that the effects are more pronounced for false political news than for false news about terrorism, natural disasters, science, urban legends, or financial information (Vosoughi et al. 2018).

Over 1 million tweets were estimated to be related to fake news by the end of the 2016 US presidential election. 11 In 2017, a German government spokesman affirmed: “We are dealing with a phenomenon of a dimension that we have not seen before,” referring to an unprecedented spread of fake news on social networks. 12 Given the strength of this new phenomenon, fake news was chosen as the word of the year by the Macquarie Dictionary both in 2016 13 and in 2018 14 as well as by the Collins Dictionary in 2017. 15 , 16 In 2020, the new term “infodemic” was coined, reflecting widespread researchers’ concern (Gupta et al. 2022; Apuke and Omar 2021; Sharma et al. 2020; Hartley and Vu 2020; Micallef et al. 2020) about the proliferation of misinformation linked to the COVID-19 pandemic.

The Gartner Group’s top strategic predictions for 2018 and beyond included the need for IT leaders to quickly develop Artificial Intelligence (AI) algorithms to address counterfeit reality and fake news. 17 However, fake news identification is a complex issue. Snow (2017) questioned the ability of AI to win the war against fake news. Similarly, other researchers have concurred that even the best AI for spotting fake news is still ineffective. 18 Besides, recent studies have shown that the power of AI algorithms to identify fake news is lower than their ability to create it (Paschen 2019). Consequently, automatic fake news detection remains a huge challenge, primarily because the content is designed to closely resemble the truth in order to deceive users, and as a result, it is often hard to determine its veracity by AI alone. It is therefore crucial to consider more effective approaches to the problem of fake news in social media.
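To illustrate why content-only AI detection struggles, consider a minimal bag-of-words naive Bayes classifier. This is a toy sketch, not any method from the surveyed literature: the training headlines and labels below are invented for illustration, and real systems rely on far richer signals than word counts.

```python
import math
from collections import Counter

# Toy training set -- invented headlines for illustration only.
TRAIN = [
    ("miracle cure doctors hate this secret", "fake"),
    ("shocking secret cure revealed miracle", "fake"),
    ("senate committee hearing on rail safety", "real"),
    ("committee reviews safety report senate", "real"),
]

def train(docs):
    """Count word frequencies per label for a naive Bayes model."""
    word_counts = {}
    label_counts = Counter()
    for text, label in docs:
        label_counts[label] += 1
        word_counts.setdefault(label, Counter()).update(text.lower().split())
    return word_counts, label_counts

def predict(text, word_counts, label_counts):
    """Return the most probable label under naive Bayes with Laplace smoothing."""
    vocab = {w for counts in word_counts.values() for w in counts}
    total_docs = sum(label_counts.values())
    best_label, best_score = None, -math.inf
    for label, n_docs in label_counts.items():
        score = math.log(n_docs / total_docs)  # log prior
        total_words = sum(word_counts[label].values())
        for word in text.lower().split():
            # Laplace smoothing avoids zero probabilities for unseen words.
            score += math.log(
                (word_counts[label][word] + 1) / (total_words + len(vocab))
            )
        if score > best_score:
            best_label, best_score = label, score
    return best_label

word_counts, label_counts = train(TRAIN)
print(predict("shocking miracle cure", word_counts, label_counts))  # fake
print(predict("senate safety hearing", word_counts, label_counts))  # real
```

The sketch works only because the toy "fake" and "real" vocabularies barely overlap; as the paper stresses, real fake news is written to closely resemble the truth, so surface word statistics alone are rarely enough.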

Contribution

The fake news problem has been addressed by researchers from various perspectives related to different topics. These topics include, but are not restricted to:

  • Social science studies, which investigate why and who falls for fake news (Altay et al. 2022; Batailler et al. 2022; Sterret et al. 2018; Badawy et al. 2019; Pennycook and Rand 2020; Weiss et al. 2020; Guadagno and Guttieri 2021), whom to trust and how perceptions of misinformation and disinformation relate to media trust and media consumption patterns (Hameleers et al. 2022), and how fake news differs from personal lies (Chiu and Oh 2021; Escolà-Gascón 2021); examine how the law can regulate digital disinformation and how governments can regulate the values of social media companies that themselves regulate disinformation spread on their platforms (Marsden et al. 2020; Schuyler 2019; Vasu et al. 2018; Burshtein 2017; Waldman 2017; Alemanno 2018; Verstraete et al. 2017); and argue the challenges to democracy (Jungherr and Schroeder 2021).
  • Behavioral intervention studies, which examine what literacy means in the age of dis-, mis- and malinformation (Carmi et al. 2020), investigate whether media literacy helps the identification of fake news (Jones-Jang et al. 2021), and attempt to improve people’s news literacy (Apuke et al. 2022; Dame Adjin-Tettey 2022; Hameleers 2022; Nagel 2022; Jones-Jang et al. 2021; Mihailidis and Viotty 2017; García et al. 2020) by encouraging people to pause to assess the credibility of headlines (Fazio 2020) and by promoting civic online reasoning (McGrew 2020; McGrew et al. 2018) and critical thinking (Lutzke et al. 2019), together with evaluations of credibility indicators (Bhuiyan et al. 2020; Nygren et al. 2019; Shao et al. 2018a; Pennycook et al. 2020a, b; Clayton et al. 2020; Ozturk et al. 2015; Metzger et al. 2020; Sherman et al. 2020; Nekmat 2020; Brashier et al. 2021; Chung and Kim 2021; Lanius et al. 2021).
  • Social media-driven studies, which investigate the effect of signals (e.g., sources) on detecting and recognizing fake news (Vraga and Bode 2017; Jakesch et al. 2019; Shen et al. 2019; Avram et al. 2020; Hameleers et al. 2020; Dias et al. 2020; Nyhan et al. 2020; Bode and Vraga 2015; Tsang 2020; Vishwakarma et al. 2019; Yavary et al. 2020) and investigate fake and reliable news sources using complex network analysis based on search engine optimization metrics (Mazzeo and Rapisarda 2022).

The impacts of fake news have reached various areas and disciplines beyond online social networks and society (García et al. 2020 ) such as economics (Clarke et al. 2020 ; Kogan et al. 2019 ; Goldstein and Yang 2019 ), psychology (Roozenbeek et al. 2020a ; Van der Linden and Roozenbeek 2020 ; Roozenbeek and van der Linden 2019 ), political science (Valenzuela et al. 2022 ; Bringula et al. 2022 ; Ricard and Medeiros 2020 ; Van der Linden et al. 2020 ; Allcott and Gentzkow 2017 ; Grinberg et al. 2019 ; Guess et al. 2019 ; Baptista and Gradim 2020 ), health science (Alonso-Galbán and Alemañy-Castilla 2022 ; Desai et al. 2022 ; Apuke and Omar 2021 ; Escolà-Gascón 2021 ; Wang et al. 2019c ; Hartley and Vu 2020 ; Micallef et al. 2020 ; Pennycook et al. 2020b ; Sharma et al. 2020 ; Roozenbeek et al. 2020b ), environmental science (e.g., climate change) (Treen et al. 2020 ; Lutzke et al. 2019 ; Lewandowsky 2020 ; Maertens et al. 2020 ), etc.

Interesting research has been carried out to review and study the fake news issue in online social networks. Some works focus not only on fake news but also distinguish between fake news and rumor (Bondielli and Marcelloni 2019; Meel and Vishwakarma 2020), while others tackle the whole problem, from characterization to processing techniques (Shu et al. 2017; Guo et al. 2020; Zhou and Zafarani 2020). However, they mostly study approaches from a machine learning perspective (Bondielli and Marcelloni 2019), a data mining perspective (Shu et al. 2017), a crowd intelligence perspective (Guo et al. 2020), or a knowledge-based perspective (Zhou and Zafarani 2020). Furthermore, most of these studies ignore at least one of these perspectives, and in many cases they do not cover other existing detection approaches, such as those using blockchain and fact-checking, or analyses of metrics used for Search Engine Optimization (Mazzeo and Rapisarda 2022). In our work, to the best of our knowledge, we cover all the approaches used for fake news detection. Indeed, we investigate the proposed solutions from broader perspectives (i.e., the detection techniques that are used, as well as the different aspects and types of information used).

In this paper, we are therefore motivated by the following facts. First, fake news detection on social media is still at an early stage of development, and many challenging issues remain that require deeper investigation; hence, it is necessary to discuss potential research directions that can improve fake news detection and mitigation tasks. Second, the dynamic nature of fake news propagation through social networks further complicates matters (Sharma et al. 2019): false information can easily reach and impact a large number of users in a short time (Friggeri et al. 2014; Qian et al. 2018). Moreover, fact-checking organizations cannot keep up with the dynamics of propagation, as they require human verification, which can hold back a timely and cost-effective response (Kim et al. 2018; Ruchansky et al. 2017; Shu et al. 2018a).

Our work focuses primarily on understanding the “fake news” problem, its related challenges and root causes, and reviewing automatic fake news detection and mitigation methods in online social networks as addressed by researchers. The main contributions that differentiate us from other works are summarized below:

  • We present the general context from which the fake news problem emerged (i.e., online deception)
  • We review existing definitions of fake news, identify the terms and features most commonly used to define fake news, and categorize related works accordingly.
  • We propose a fake news typology classification based on the various categorizations of fake news reported in the literature.
  • We point out the most challenging factors preventing researchers from proposing highly effective solutions for automatic fake news detection in social media.
  • We highlight and classify representative studies in the domain of automatic fake news detection and mitigation on online social networks including the key methods and techniques used to generate detection models.
  • We discuss the key shortcomings that may inhibit the effectiveness of the proposed fake news detection methods in online social networks.
  • We provide recommendations that can help address these shortcomings and improve the quality of research in this domain.

The rest of this article is organized as follows. We explain the methodology with which the studied references are collected and selected in Sect.  2 . We introduce the online deception problem in Sect.  3 . We highlight the modern-day problem of fake news in Sect.  4 , followed by challenges facing fake news detection and mitigation tasks in Sect.  5 . We provide a comprehensive literature review of the most relevant scholarly works on fake news detection in Sect.  6 . We provide a critical discussion and recommendations that may fill some of the gaps we have identified, as well as a classification of the reviewed automatic fake news detection approaches, in Sect.  7 . Finally, we provide a conclusion and propose some future directions in Sect.  8 .

Review methodology

This section introduces the systematic review methodology on which we relied to perform our study. We start with the formulation of the research questions, which allowed us to select the relevant research literature. Then, we provide the different sources of information together with the search and inclusion/exclusion criteria we used to select the final set of papers.

Research questions formulation

The research scope, research questions, and inclusion/exclusion criteria were established following an initial evaluation of the literature. The following research questions were formulated and addressed:

  • RQ1: What is fake news in social media, how is it defined in the literature, what are its related concepts, and what are its different types?
  • RQ2: What are the existing challenges and issues related to fake news?
  • RQ3: What techniques are available to perform fake news detection in social media?

Sources of information

We broadly searched journal and conference research articles, books, and magazines as sources from which to extract relevant articles. We used the main scientific databases and digital libraries in our search, such as Google Scholar, 19 IEEE Xplore, 20 Springer Link, 21 ScienceDirect, 22 Scopus, 23 and the ACM Digital Library. 24 We also screened the most relevant high-profile conferences, such as WWW, SIGKDD, VLDB and ICDE, to find recent work.

Search criteria

We focused our search on a period of ten years, while ensuring that about two-thirds of the research papers considered were published in or after 2019. Additionally, since we concentrated on reviewing the current state of the art as well as the challenges and future directions, we defined a set of keywords with which to search the above-mentioned scientific databases. The set of keywords includes the following terms: fake news, disinformation, misinformation, information disorder, social media, detection techniques, detection methods, survey, literature review.

Study selection, exclusion and inclusion criteria

To retrieve relevant research articles, based on our sources of information and search criteria, a systematic keyword-based search was carried out by posing different search queries, as shown in Table  1 .

List of keywords for searching relevant articles
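To illustrate how such keyword sets can be combined into search queries, the sketch below builds boolean search strings from the keyword groups listed above. The grouping into topic/context/task terms and the `AND`-query format are our own illustrative assumptions, not the exact queries posed to the databases.

```python
from itertools import product

# Keyword groups drawn from the search criteria above; the grouping and
# the boolean query format are illustrative assumptions.
topic_terms = ["fake news", "disinformation", "misinformation", "information disorder"]
context_terms = ["social media"]
task_terms = ["detection techniques", "detection methods", "survey", "literature review"]

def build_queries(topics, contexts, tasks):
    """Combine one term from each group into a quoted AND query."""
    return [
        f'"{t}" AND "{c}" AND "{k}"'
        for t, c, k in product(topics, contexts, tasks)
    ]

queries = build_queries(topic_terms, context_terms, task_terms)
print(len(queries))  # 4 topics x 1 context x 4 tasks = 16 candidate queries
print(queries[0])    # "fake news" AND "social media" AND "detection techniques"
```

Posing every combination systematically, rather than ad hoc searches, is what makes the keyword-based retrieval reproducible.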

This search yielded an initial list of articles. We then applied the set of inclusion/exclusion criteria presented in Table 2 to this initial list to determine whether each study should be included and to select the appropriate research papers.

Inclusion and exclusion criteria

After reading the abstracts, we excluded the articles that did not meet our criteria and kept the research most relevant to understanding the field. We reviewed the remaining articles in full and found only 61 research papers that discuss the definition of the term fake news and its related concepts (see Table 4). We used the remaining papers to understand the field, reveal the challenges, review the detection techniques, and discuss future directions.

Classification of fake news definitions based on the used term and features

A brief introduction of online deception

The Cambridge Online Dictionary defines deception as “the act of hiding the truth, especially to get an advantage.” Deception relies on people’s trust, doubt and strong emotions, which may prevent them from thinking and acting clearly (Aïmeur et al. 2018). In previous work (Aïmeur et al. 2018), we also defined it as the process that undermines the ability to consciously make decisions and take convenient actions, following personal values and boundaries. In other words, deception gets people to do things they would not otherwise do. In the context of online deception, several factors need to be considered: the deceiver, the purpose or aim of the deception, the social media service, the deception technique and the potential target (Aïmeur et al. 2018; Hage et al. 2021).

Researchers are working on developing new ways to protect users and prevent online deception (Aïmeur et al. 2018). Due to the sophistication of attacks, this is a complex task: malicious attackers are using ever more sophisticated tools and strategies to deceive users. Furthermore, the way information is organized and exchanged in social media may expose OSN users to many risks (Aïmeur et al. 2013).

This field is a recent research area that requires the collaborative effort of multiple disciplines such as psychology, sociology, journalism and computer science, as well as cyber-security and digital marketing (the latter two not yet well explored in the field of dis/mis/malinformation but relevant for future research). Moreover, Ismailov et al. (2020) analyzed the main causes that could be responsible for the efficiency gap between laboratory results and real-world implementations.

Reviewing the state of the art of online deception is beyond the scope of this paper. However, we think it is crucial to note that fake news, misinformation and disinformation are indeed part of the larger landscape of online deception (Hage et al. 2021).

Fake news, the modern-day problem

Fake news has existed for a very long time, since well before its wide circulation was facilitated by the invention of the printing press. For instance, Socrates was condemned to death more than twenty-five hundred years ago on the basis of the fake news that he was guilty of impiety against the pantheon of Athens and of corrupting the youth. A Google Trends analysis of the term “fake news” reveals an explosion in popularity around the time of the 2016 US presidential election. Fake news detection is a problem that has recently been addressed by numerous organizations, including the European Union and NATO.

In this section, we first overview the fake news definitions as they were provided in the literature. We identify the terms and features used in the definitions, and we classify the latter based on them. Then, we provide a fake news typology based on distinct categorizations that we propose, and we define and compare the most cited forms of one specific fake news category (i.e., the intent-based fake news category).

Definitions of fake news

“Fake news” is defined in the Collins English Dictionary as false and often sensational information disseminated under the guise of news reporting, yet the term has evolved over time and has become synonymous with the spread of false information (Cooke 2017).

The first definition of the term fake news was provided by Allcott and Gentzkow (2017): news articles that are intentionally and verifiably false and could mislead readers. Other definitions have since been provided in the literature, and they all agree that fake news is false (i.e., non-factual). However, they disagree on whether related concepts such as satire, rumors, conspiracy theories, misinformation and hoaxes should be included in or excluded from the definition. More recently, Nakov (2020) reported that the term fake news has started to mean different things to different people; for some politicians, it even means “news that I do not like.”

Hence, there is still no agreed definition of the term “fake news.” Moreover, we can find many terms and concepts in the literature that refer to fake news (Van der Linden et al. 2020 ; Molina et al. 2021 ) (Abu Arqoub et al. 2022 ; Allen et al. 2020 ; Allcott and Gentzkow 2017 ; Shu et al. 2017 ; Sharma et al. 2019 ; Zhou and Zafarani 2020 ; Zhang and Ghorbani 2020 ; Conroy et al. 2015 ; Celliers and Hattingh 2020 ; Nakov 2020 ; Shu et al. 2020c ; Jin et al. 2016 ; Rubin et al. 2016 ; Balmas 2014 ; Brewer et al. 2013 ; Egelhofer and Lecheler 2019 ; Mustafaraj and Metaxas 2017 ; Klein and Wueller 2017 ; Potthast et al. 2017 ; Lazer et al. 2018 ; Weiss et al. 2020 ; Tandoc Jr et al. 2021 ; Guadagno and Guttieri 2021 ), disinformation (Kapantai et al. 2021 ; Shu et al. 2020a , c ; Kumar et al. 2016 ; Bhattacharjee et al. 2020 ; Marsden et al. 2020 ; Jungherr and Schroeder 2021 ; Starbird et al. 2019 ; Ireton and Posetti 2018 ), misinformation (Wu et al. 2019 ; Shu et al. 2020c ; Shao et al. 2016 , 2018b ; Pennycook and Rand 2019 ; Micallef et al. 2020 ), malinformation (Dame Adjin-Tettey 2022 ) (Carmi et al. 2020 ; Shu et al. 2020c ), false information (Kumar and Shah 2018 ; Guo et al. 2020 ; Habib et al. 2019 ), information disorder (Shu et al. 2020c ; Wardle and Derakhshan 2017 ; Wardle 2018 ; Derakhshan and Wardle 2017 ), information warfare (Guadagno and Guttieri 2021 ) and information pollution (Meel and Vishwakarma 2020 ).

There is also a remarkable amount of disagreement over the classification of the term fake news in the research literature, as well as in policy (de Cock Buning 2018 ; ERGA 2018 , 2021 ). Some consider fake news as a type of misinformation (Allen et al. 2020 ; Singh et al. 2021 ; Ha et al. 2021 ; Pennycook and Rand 2019 ; Shao et al. 2018b ; Di Domenico et al. 2021 ; Sharma et al. 2019 ; Celliers and Hattingh 2020 ; Klein and Wueller 2017 ; Potthast et al. 2017 ; Islam et al. 2020 ), others consider it as a type of disinformation (de Cock Buning 2018 ) (Bringula et al. 2022 ; Baptista and Gradim 2022 ; Tsang 2020 ; Tandoc Jr et al. 2021 ; Bastick 2021 ; Khan et al. 2019 ; Shu et al. 2017 ; Nakov 2020 ; Shu et al. 2020c ; Egelhofer and Lecheler 2019 ), while others associate the term with both disinformation and misinformation (Wu et al. 2022 ; Dame Adjin-Tettey 2022 ; Hameleers et al. 2022 ; Carmi et al. 2020 ; Allcott and Gentzkow 2017 ; Zhang and Ghorbani 2020 ; Potthast et al. 2017 ; Weiss et al. 2020 ; Tandoc Jr et al. 2021 ; Guadagno and Guttieri 2021 ). On the other hand, some prefer to differentiate fake news from both terms (ERGA 2018 ; Molina et al. 2021 ; ERGA 2021 ) (Zhou and Zafarani 2020 ; Jin et al. 2016 ; Rubin et al. 2016 ; Balmas 2014 ; Brewer et al. 2013 ).

The existing terms can be separated into two groups. The first group represents the general terms, which are information disorder , false information and fake news , each of which includes a subset of terms from the second group. The second group represents the elementary terms, which are misinformation , disinformation and malinformation . The literature agrees on the definitions of the latter group, but there is still no agreed-upon definition of the first group. In Fig.  2 , we model the relationship between the most used terms in the literature.


Modeling of the relationship between terms related to fake news

The terms most used in the literature to refer to, categorize and classify fake news can be summarized and defined as shown in Table 3, in which we capture the similarities and show the differences between the terms based on two common key features: the intent and the authenticity of the news content. The intent feature refers to the intention behind the term that is used (i.e., whether or not the purpose is to mislead or cause harm), whereas the authenticity feature refers to its factual aspect (i.e., whether the content is verifiably false or not; we label it genuine in the latter case). Some of these terms are explicitly used to refer to fake news (i.e., disinformation, misinformation and false information), while others are not (i.e., malinformation). In the comparison table, an empty dash (–) cell denotes that the classification does not apply.

A comparison between used terms based on intent and authenticity
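The two-feature scheme underlying this comparison can be made concrete for the three elementary terms, on whose definitions the literature agrees. The encoding below is a minimal illustrative sketch (the boolean feature names are our own), not a reproduction of Table 3.

```python
# Illustrative encoding of the two key features (intent, authenticity)
# for the three elementary terms agreed upon in the literature.
TERMS = {
    #                  intent to mislead/harm   content is genuine
    "misinformation": {"intent": False,         "authentic": False},
    "disinformation": {"intent": True,          "authentic": False},
    "malinformation": {"intent": True,          "authentic": True},
}

def classify(intent: bool, authentic: bool) -> str:
    """Map an (intent, authenticity) pair back to its elementary term."""
    for term, feats in TERMS.items():
        if feats["intent"] == intent and feats["authentic"] == authentic:
            return term
    return "unclassified"  # e.g., genuine content shared in good faith

print(classify(intent=True, authentic=True))  # malinformation
```

The one unmapped combination (genuine content, no harmful intent) is simply ordinary information, which is why it falls outside the taxonomy.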

In Fig.  3 , we identify the different features used in the literature to define fake news (i.e., intent, authenticity and knowledge). Hence, some definitions are based on two key features, which are authenticity and intent (i.e., news articles that are intentionally and verifiably false and could mislead readers). However, other definitions are based on either authenticity or intent. Other researchers categorize false information on the web and social media based on its intent and knowledge (i.e., when there is a single ground truth). In Table  4 , we classify the existing fake news definitions based on the used term and the used features . In the classification, the references in the cells refer to the research study in which a fake news definition was provided, while the empty dash (–) cells denote that the classification does not apply.


The features used for fake news definition

Fake news typology

Various categorizations of fake news have been provided in the literature. We can distinguish two major categories of fake news based on the studied perspective (i.e., intention or content), as shown in Fig. 4. Note that our proposed fake news typology is not about detection methods, and its categories are not mutually exclusive. Hence, a given instance of fake news can be described from both perspectives (i.e., intention and content) at the same time. For instance, satire (i.e., intent-based fake news) can contain text and/or multimedia content types of data (e.g., headline, body, image, video) (i.e., content-based fake news), and so on.


Most researchers classify fake news based on the intent (Collins et al. 2020 ; Bondielli and Marcelloni 2019 ; Zannettou et al. 2019 ; Kumar et al. 2016 ; Wardle 2017 ; Shu et al. 2017 ; Kumar and Shah 2018 ) (see Sect.  4.2.2 ). However, other researchers (Parikh and Atrey 2018 ; Fraga-Lamas and Fernández-Caramés 2020 ; Hasan and Salah 2019 ; Masciari et al. 2020 ; Bakdash et al. 2018 ; Elhadad et al. 2019 ; Yang et al. 2019b ) focus on the content to categorize types of fake news through distinguishing the different formats and content types of data in the news (e.g., text and/or multimedia).

Recently, another classification, based on the combination of content and intent, was proposed by Zhang and Ghorbani (2020). They distinguish between the physical and non-physical news content of fake news: physical content consists of the carriers and format of the news, while non-physical content consists of the opinions, emotions, attitudes and sentiments that the news creators want to express.

Content-based fake news category

According to researchers of this category (Parikh and Atrey 2018; Fraga-Lamas and Fernández-Caramés 2020; Hasan and Salah 2019; Masciari et al. 2020; Bakdash et al. 2018; Elhadad et al. 2019; Yang et al. 2019b), forms of fake news may include false text, such as hyperlinks or embedded content, and multimedia, such as false videos (Demuyakor and Opata 2022), images (Masciari et al. 2020; Shen et al. 2019) and audio (Demuyakor and Opata 2022). We can also find multimodal content (Shu et al. 2020a), that is, fake news articles and posts composed of multiple types of data combined together, for example, a fabricated image along with related text (Shu et al. 2020a). Examples in this category include deepfake videos (Yang et al. 2019b) and GAN-generated fake images (Zhang et al. 2019b), which are artificial intelligence-based, machine-generated fake content that is hard for unsophisticated social network users to identify.

The effects of these forms of fake news content on credibility assessment and sharing intentions vary, which in turn influences the spread of fake news on OSNs. For instance, people with little knowledge about an issue, compared to those who are strongly concerned about it, tend to be easier to convince that misleading or fake news is real, especially when it is shared via video rather than text or audio (Demuyakor and Opata 2022).

Intent-based Fake News Category

The most often mentioned and discussed forms of fake news according to researchers in this category include, but are not restricted to, clickbait, hoax, rumor, satire, propaganda, framing, conspiracy theories and others. In the following subsections, we explain these types of fake news as they were defined in the literature and undertake a brief comparison between them, as depicted in Table 5. The following are the most cited forms of intent-based fake news; their comparison is based on what we suspect are the most common criteria mentioned by researchers.

A comparison between the different types of intent-based fake news

Clickbait refers to misleading headlines and thumbnails of content on the web (Zannettou et al. 2019) that tend to accompany fake stories with catchy headlines aimed at enticing the reader to click on a link (Collins et al. 2020). This type is considered the least severe form of false information because, if a user reads/views the whole content, it is possible to tell whether the headline and/or the thumbnail was misleading (Zannettou et al. 2019). The underlying goal of clickbait is to increase traffic to a website (Zannettou et al. 2019).
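Because clickbait lives almost entirely in the headline, simple surface heuristics already capture part of it. The sketch below is an illustrative cue-counting heuristic of our own devising (the cue patterns are assumptions, not drawn from the cited works):

```python
import re

# Illustrative clickbait cues: forward-referencing phrases and
# sensational punctuation. A real detector would learn such features.
CUES = [
    r"you won'?t believe",
    r"what happened next",
    r"\bthis one\b",
    r"!{2,}",   # repeated exclamation marks
    r"\?\?",    # repeated question marks
]

def clickbait_score(headline: str) -> int:
    """Count how many clickbait cues the headline matches."""
    h = headline.lower()
    return sum(bool(re.search(cue, h)) for cue in CUES)

print(clickbait_score("You won't believe what happened next!!"))  # 3
print(clickbait_score("Parliament passes budget bill"))           # 0
```

A thresholded cue count like this only flags the crudest cases; it mirrors the observation above that the mismatch between headline and body, not the headline alone, is what defines clickbait.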

A hoax is a false (Zubiaga et al. 2018) or inaccurate (Zannettou et al. 2019), intentionally fabricated (Collins et al. 2020) news story that masks the truth (Zubiaga et al. 2018) and is presented as factual (Zannettou et al. 2019) in order to deceive the public or audiences (Collins et al. 2020). This category is also known as half-truth or factoid stories (Zannettou et al. 2019). Popular examples of hoaxes are stories that falsely report the death of celebrities (Zannettou et al. 2019) and public figures (Collins et al. 2020). Recently, hoaxes about COVID-19 have been circulating through social media.

The term rumor refers to ambiguous or never-confirmed claims (Zannettou et al. 2019) that are disseminated with a lack of evidence to support them (Sharma et al. 2019). Rumors originate from unverified sources and are widely propagated on OSNs (Zannettou et al. 2019); however, they are not necessarily false and may turn out to be true, or may remain unresolved (Zubiaga et al. 2018).

Satire refers to stories that contain a great deal of irony and humor (Zannettou et al. 2019). It presents stories as news that might be factually incorrect, but the intent is not to deceive; rather, it is to call out, ridicule, or expose behavior that is shameful, corrupt, or otherwise “bad” (Golbeck et al. 2018). This is done with a fabricated story or by exaggerating the truth reported in mainstream media in the form of comedy (Collins et al. 2020). The intent behind satire is arguably legitimate, yet many authors (such as Wardle (2017)) do include satire as a type of fake news: although there is no intention to cause harm, it has the potential to mislead or fool people.

Golbeck et al. (2018) also note that there is a spectrum from fake to satirical news, which they found to be exploited by many fake news sites. These sites used disclaimers at the bottom of their webpages suggesting they were “satirical,” even when there was nothing satirical about their articles, to protect themselves from accusations of being fake. What distinguishes the satirical form of fake news is that the authors or hosts present themselves as comedians or entertainers rather than as journalists informing the public (Collins et al. 2020). Nevertheless, many audiences believe the information conveyed in this satirical form, because the comedian usually takes news from mainstream media and frames it to suit their program (Collins et al. 2020).

Propaganda refers to news stories created by political entities to mislead people. It is a special instance of fabricated stories that aim to harm the interests of a particular party and, typically, has a political context (Zannettou et al. 2019 ). Propaganda was widely used during both World Wars (Collins et al. 2020 ) and during the Cold War (Zannettou et al. 2019 ). It is a consequential type of false information as it can change the course of human history (e.g., by changing the outcome of an election) (Zannettou et al. 2019 ). States are the main actors of propaganda. Recently, propaganda has been used by politicians and media organizations to support a certain position or view (Collins et al. 2020 ). Online astroturfing can be an example of the tools used for the dissemination of propaganda. It is a covert manipulation of public opinion (Peng et al. 2017 ) that aims to make it seem that many people share the same opinion about something. Astroturfing can affect different domains of interest, based on which online astroturfing can be mainly divided into political astroturfing, corporate astroturfing and astroturfing in e-commerce or online services (Mahbub et al. 2019 ). Propaganda types of fake news can be debunked with manual fact-based detection models such as the use of expert-based fact-checkers (Collins et al. 2020 ).

Framing refers to employing some aspect of reality to make content more visible while the truth is concealed (Collins et al. 2020), in order to deceive and misguide readers. People understand certain concepts based on the way they are coined and framed. An example of framing was provided by Collins et al. (2020): suppose a leader X says “I will neutralize my opponent,” simply meaning he will beat his opponent in a given election. Such a statement may be framed as “leader X threatens to kill Y,” and this framed statement provides a total misrepresentation of the original meaning.

Conspiracy Theories

Conspiracy theories refer to the belief that an event is the result of secret plots generated by powerful conspirators. Conspiracy belief refers to people’s adoption and belief of conspiracy theories, and it is associated with psychological, political and social factors (Douglas et al. 2019 ). Conspiracy theories are widespread in contemporary democracies (Sutton and Douglas 2020 ), and they have major consequences. For instance, lately and during the COVID-19 pandemic, conspiracy theories have been discussed from a public health perspective (Meese et al. 2020 ; Allington et al. 2020 ; Freeman et al. 2020 ).

Comparison Between Most Popular Intent-based Types of Fake News

Following a review of the most popular intent-based types of fake news, we compare them as shown in Table  5 based on the most common criteria mentioned by researchers in their definitions as listed below.

  • the intent behind the news, which refers to whether a given news type was mainly created to intentionally deceive people or not (e.g., humor, irony, entertainment, etc.);
  • the way that the news propagates through OSN, which determines the nature of the propagation of each type of fake news and this can be either fast or slow propagation;
  • the severity of the impact of the news on OSN users, which refers to whether the public has been highly impacted by the given type of fake news; the mentioned impact of each fake news type is mainly the proportion of the negative impact;
  • and the goal behind disseminating the news, which can be to gain popularity for a particular entity (e.g., a political party), to make a profit (e.g., a lucrative business), or something else: humor and irony in the case of satire; spreading panic or anger and manipulating the public in the case of hoaxes; made-up stories about a particular person or entity in the case of rumors; and misguiding readers in the case of framing.

Note, however, that the comparison provided in Table 5 is deduced from the studied research papers; it reflects our point of view and is not based on empirical data.

We suspect that the most dangerous types of fake news are those with a high intention to deceive the public, fast propagation through social media, a high negative impact on OSN users, and complicated hidden goals and agendas. The other types of fake news, while less dangerous, should not be ignored.

Moreover, it is important to highlight that these types of fake news can overlap, so a given piece of false information may fall within multiple categories (Zannettou et al. 2019). We borrow two examples from Zannettou et al. (2019) to illustrate possible overlaps: (1) a rumor may also use clickbait techniques to increase the audience that will read the story; and (2) a propaganda story can be a special instance of a framing story.

Challenges related to fake news detection and mitigation

To alleviate fake news and its threats, it is crucial to first identify and understand the factors involved that continue to challenge researchers. Thus, the main question is to explore and investigate the factors that make it easier to fall for manipulated information. Despite the tremendous progress made in alleviating some of the challenges in fake news detection (Sharma et al. 2019 ; Zhou and Zafarani 2020 ; Zhang and Ghorbani 2020 ; Shu et al. 2020a ), much more work needs to be accomplished to address the problem effectively.

In this section, we discuss several open issues that make fake news detection in social media a challenging problem. These issues can be summarized as follows: content-based issues (i.e., deceptive content that resembles the truth very closely), contextual issues (i.e., lack of user awareness, social bots as spreaders of fake content, and the dynamic nature of OSNs, which leads to fast propagation), as well as the issue of existing datasets (i.e., there is still no one-size-fits-all benchmark dataset for fake news detection). These various aspects have been shown (Shu et al. 2017) to have a great impact on the accuracy of fake news detection approaches.

Content-based issue, deceptive content

Automatic fake news detection remains a huge challenge, primarily because the content is designed to closely resemble the truth. Moreover, most deceivers choose their words carefully and use language strategically to avoid being caught. It is therefore often hard for an AI system to determine the veracity of such content without relying on additional information from third parties such as fact-checkers.

Abdullah-All-Tanvir et al. (2020) reported that fake news tends to have more complicated stories, hardly ever cites references, and is more likely to contain a greater number of words that express negative emotions. This complexity makes it very difficult for a human to manually assess the credibility of such content, so detecting fake news on social media is quite challenging. Moreover, fake news appears in multiple types and forms, which makes it hard to define a single global solution able to capture and deal with all the disseminated content. Consequently, detecting false information is not a straightforward task due to its various types and forms (Zannettou et al. 2019).
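The linguistic cues just mentioned (story length, negative-emotion vocabulary, presence of references) can be operationalized as simple features. The sketch below is a minimal illustration under our own assumptions: the tiny negative-word list is a stand-in for a real lexicon such as LIWC, and URLs are used as a crude proxy for "references."

```python
import re

# Illustrative negative-emotion lexicon; a real system would use a
# validated resource (e.g., LIWC categories) instead of this stand-in.
NEGATIVE_WORDS = {"fear", "hate", "terrible", "disaster", "outrage", "shocking"}

def linguistic_cues(text: str) -> dict:
    """Extract the cues discussed above: length, negative-emotion word
    count, and whether the article references any source (approximated
    here by the presence of a URL)."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return {
        "n_words": len(tokens),
        "n_negative": sum(t in NEGATIVE_WORDS for t in tokens),
        "has_reference": bool(re.search(r"https?://", text)),
    }

cues = linguistic_cues("Shocking disaster! They hate us. No sources given.")
print(cues)  # high negative-emotion count, no reference
```

Feature vectors like this are what a downstream classifier would consume; the point of the example is only that the cues reported by Abdullah-All-Tanvir et al. (2020) are cheap to compute, even if they are far from sufficient on their own.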

Contextual issues

Contextual issues are challenges that we suspect are not related to the content of the news itself but rather are inferred from the context of the online news post: humans are the weakest factor due to a lack of user awareness, social bots act as spreaders, and the dynamic nature of online social platforms leads to the fast propagation of fake news.

Humans are the weakest factor due to the lack of awareness

Recent statistics show that the percentage of unintentional fake news spreaders (people who share fake news without the intention to mislead) on social media is five times higher than that of intentional spreaders. Moreover, another recent statistic shows that the percentage of people who were confident about their ability to discern fact from fiction is ten times higher than that of those who were not confident about the truthfulness of what they were sharing. From this we can deduce a lack of human awareness about the rise of fake news.

Public susceptibility and lack of user awareness (Sharma et al. 2019 ) have always been the most challenging problem when dealing with fake news and misinformation. This is a complex issue because many people believe almost everything on the Internet and the ones who are new to digital technology or have less expertise may be easily fooled (Edgerly et al. 2020 ).

Moreover, it has been widely shown (Metzger et al. 2020; Edgerly et al. 2020) that people are often motivated to support and accept information that matches their preexisting viewpoints and beliefs, and to reject information that does not fit. In this vein, Shu et al. (2017) illustrate an interesting correlation between fake news spread and psychological and cognitive theories. They further suggest that humans are more likely to believe information that confirms their existing views and ideological beliefs. Consequently, they deduce that humans are naturally not very good at differentiating real information from fake information.

Recent research by Giachanou et al. ( 2020 ) studies the role of personality and linguistic patterns in discriminating between fake news spreaders and fact-checkers. They classify a user as a potential fact-checker or a potential fake news spreader based on features that represent users’ personality traits and linguistic patterns used in their tweets. They show that leveraging personality traits and linguistic patterns can improve the performance in differentiating between checkers and spreaders.
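The idea in Giachanou et al. (2020) can be sketched as a standard supervised pipeline: represent each user by personality-trait and linguistic-pattern features, then learn to separate fact-checkers from spreaders. The toy example below is our own illustration of that idea, not their model: the three feature dimensions and the nearest-centroid rule are assumptions chosen for brevity.

```python
from math import dist
from statistics import mean

# Hypothetical per-user features: (openness, conscientiousness,
# negative-emotion word ratio in tweets). Values are made up.
CHECKERS = [(0.8, 0.9, 0.05), (0.7, 0.8, 0.10)]
SPREADERS = [(0.4, 0.3, 0.40), (0.5, 0.2, 0.35)]

def centroid(points):
    """Mean point of a set of feature vectors."""
    return tuple(mean(axis) for axis in zip(*points))

C_CHECK, C_SPREAD = centroid(CHECKERS), centroid(SPREADERS)

def classify_user(features):
    """Assign the label of the nearest class centroid."""
    return ("fact-checker"
            if dist(features, C_CHECK) < dist(features, C_SPREAD)
            else "spreader")

print(classify_user((0.75, 0.85, 0.08)))  # fact-checker
```

In practice Giachanou et al. train a far richer classifier; the sketch only shows why combining personality and linguistic features can make the two groups geometrically separable.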

Furthermore, several researchers studied the prevalence of fake news on social networks during (Allcott and Gentzkow 2017 ; Grinberg et al. 2019 ; Guess et al. 2019 ; Baptista and Gradim 2020 ) and after (Garrett and Bond 2021 ) the 2016 US presidential election and found that individuals most likely to engage with fake news sources were generally conservative-leaning, older, and highly engaged with political news.

Metzger et al. ( 2020 ) examine how individuals evaluate the credibility of biased news sources and stories. They investigate the role of both cognitive dissonance and credibility perceptions in selective exposure to attitude-consistent news information. They found that online news consumers tend to perceive attitude-consistent news stories as more accurate and more credible than attitude-inconsistent stories.

Similarly, Edgerly et al. ( 2020 ) explore the impact of news headlines on the audience’s intent to verify whether given news is true or false. They concluded that participants exhibit higher intent to verify the news only when they believe the headline to be true, which is predicted by perceived congruence with preexisting ideological tendencies.

Luo et al. ( 2022 ) evaluate the effects of endorsement cues in social media on message credibility and detection accuracy. Results showed that headlines associated with a high number of likes increased credibility, thereby enhancing detection accuracy for real news but undermining accuracy for fake news. Consequently, they highlight the urgency of empowering individuals to assess both news veracity and endorsement cues appropriately on social media.

Moreover, misinformed people are a greater problem than uninformed people (Kuklinski et al. 2000 ), because the former hold inaccurate opinions (which may concern politics, climate change, medicine) that are harder to correct. Indeed, people find it difficult to update their misinformation-based beliefs even after they have been proved to be false (Flynn et al. 2017 ). Moreover, even if a person has accepted the corrected information, his/her belief may still affect their opinion (Nyhan and Reifler 2015 ).

Falling for disinformation may also be explained by a lack of critical thinking and of the need for evidence supporting information (Vilmer et al. 2018; Badawy et al. 2019). However, it is also possible that people choose misinformation because they engage in directionally motivated reasoning (Badawy et al. 2019; Flynn et al. 2017). Online users are generally vulnerable and tend to perceive social media as reliable, as reported by Abdullah-All-Tanvir et al. (2019), who propose to automate fake news recognition.

It is worth noting that although bots are behind a large share of misrepresentations, specific individuals also contribute substantially to this issue (Abdullah-All-Tanvir et al. 2019). Furthermore, Vosoughi et al. (2018) found that, contrary to conventional wisdom, robots accelerated the spread of real and fake news at the same rate, implying that fake news spreads more than the truth because humans, not robots, are more likely to spread it.

In this case, verified users and those with numerous followers were not necessarily responsible for spreading the corrupted posts (Abdullah-All-Tanvir et al. 2019).

Viral fake news can wreak havoc on our society. Therefore, to mitigate its negative impact, it is important to analyze the factors that lead people to fall for misinformation and to further understand why people spread fake news (Cheng et al. 2020). Measuring the accuracy, credibility, veracity and validity of news content can also be a key countermeasure.

Social bots spreaders

Several authors (Shu et al. 2018b, 2017; Shi et al. 2019; Bessi and Ferrara 2016; Shao et al. 2018a) have also shown that fake news is likely to be created and spread by non-human accounts with similar attributes and structure in the network, such as social bots (Ferrara et al. 2016). Bots (short for software robots) have existed since the early days of computers. A social bot is a computer algorithm that automatically produces content and interacts with humans on social media, trying to emulate and possibly alter their behavior (Ferrara et al. 2016). Although they are designed to provide a useful service, they can be harmful, for example when they contribute to the spread of unverified information or rumors (Ferrara et al. 2016). It is important to note, however, that bots are simply tools created and maintained by humans for specific hidden agendas.

Social bots tend to connect with legitimate users instead of other bots. They try to act like a human with fewer words and fewer followers on social media. This contributes to the forwarding of fake news (Jiang et al. 2019 ). Moreover, there is a difference between bot-generated and human-written clickbait (Le et al. 2019 ).

Many researchers have addressed ways of identifying and analyzing possible sources of fake news spread in social media. Recent research by Shu et al. (2020a) describes social bots' use of two strategies to spread low-credibility content. First, they amplify interactions with content as soon as it is created to make it look legitimate and to facilitate its spread across social networks. Second, they try to increase public exposure to the created content, and thus boost its perceived credibility, by targeting influential users who are more likely to believe disinformation, in the hope of getting them to "repost" the fabricated content. They further discuss the social bot detection systems taxonomy proposed by Ferrara et al. (2016), which divides bot detection methods into three classes: (1) graph-based, (2) crowdsourcing and (3) feature-based social bot detection methods.

Similarly, Shao et al. ( 2018a ) examine social bots and how they promote the spread of misinformation through millions of Twitter posts during and following the 2016 US presidential campaign. They found that social bots played a disproportionate role in spreading articles from low-credibility sources by amplifying such content in the early spreading moments and targeting users with many followers through replies and mentions to expose them to this content and induce them to share it.

Ismailov et al. ( 2020 ) assert that the techniques used to detect bots depend on the social platform and the objective. They note that a malicious bot designed to make friends with as many accounts as possible will require a different detection approach than a bot designed to repeatedly post links to malicious websites. Therefore, they identify two models for detecting malicious accounts, each using a different set of features. Social context models achieve detection by examining features related to an account’s social presence including features such as relationships to other accounts, similarities to other users’ behaviors, and a variety of graph-based features. User behavior models primarily focus on features related to an individual user’s behavior, such as frequency of activities (e.g., number of tweets or posts per time interval), patterns of activity and clickstream sequences.
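A minimal sketch of a user-behavior model of the kind described above can be built from a couple of activity features. The feature set, thresholds, and toy accounts below are illustrative assumptions, not the features actually used by Ismailov et al.:

```python
from dataclasses import dataclass

@dataclass
class Account:
    posts_last_24h: int            # activity frequency
    followers: int
    following: int
    mean_secs_between_posts: float

def behavior_features(a: Account) -> dict:
    """Extract simple user-behavior features: activity frequency,
    follower/following balance, and posting burstiness."""
    return {
        "posts_per_hour": a.posts_last_24h / 24.0,
        "follow_ratio": a.followers / max(a.following, 1),
        "burstiness": 1.0 / max(a.mean_secs_between_posts, 1.0),
    }

def looks_like_bot(a: Account,
                   max_posts_per_hour: float = 10.0,
                   min_follow_ratio: float = 0.05) -> bool:
    """Illustrative threshold rule: very high posting frequency combined
    with an extremely low follower/following ratio is bot-like."""
    f = behavior_features(a)
    return (f["posts_per_hour"] > max_posts_per_hour
            and f["follow_ratio"] < min_follow_ratio)

# A hyperactive account following thousands but followed by few (bot-like),
# versus an ordinary account
bot = Account(posts_last_24h=600, followers=12, following=4000,
              mean_secs_between_posts=90)
human = Account(posts_last_24h=5, followers=300, following=280,
                mean_secs_between_posts=7200)
```

In a real system these features would feed a trained classifier rather than fixed thresholds, but the shape of the pipeline (extract behavioral features per account, then decide) is the same.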

Therefore, it is crucial to consider bot detection techniques to distinguish bots from normal users to better leverage user profile features to detect fake news.

However, there is also another "bot-like" strategy that aims to massively promote disinformation and fake content on social platforms: bot farms, also called troll farms. These are not social bots but groups of organized individuals, hired to spread fake news or other harmful content at scale, who engage in trolling or bot-like promotion of narratives in a coordinated fashion (Wardle 2018). A prominent troll farm example is the Russia-based Internet Research Agency (IRA), which disseminated inflammatory content online to influence the outcome of the 2016 U.S. presidential election. 33 As a result, Twitter suspended accounts connected to the IRA and deleted 200,000 tweets from Russian trolls (Jamieson 2020). Another example in this category is review bombing (Moro and Birt 2022), in which coordinated groups of people massively perform the same negative action online (e.g., dislike, negative review/comment) on a video, game, post, product, etc., in order to drive down its aggregate review score. The review bombers can be humans or bots coordinating to cause harm and mislead people by falsifying facts.
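The coordination signal behind review bombing, many identical negative actions landing on one item in a short time, can be sketched as a sliding-window burst detector. The window size and threshold below are illustrative, not values from the literature:

```python
from datetime import datetime, timedelta

def is_review_bombed(timestamps, window=timedelta(hours=1), threshold=50):
    """Flag coordinated review bombing: `threshold` or more negative
    actions on a single item inside one sliding `window`.
    Parameters are illustrative assumptions."""
    ts = sorted(timestamps)
    left = 0
    for right in range(len(ts)):
        # shrink the window from the left until it spans <= `window`
        while ts[right] - ts[left] > window:
            left += 1
        if right - left + 1 >= threshold:
            return True
    return False

base = datetime(2022, 1, 1)
# 60 negative reviews within ten minutes: a coordinated burst
burst = [base + timedelta(seconds=10 * i) for i in range(60)]
# 60 negative reviews spread over a week: organic criticism
spread = [base + timedelta(hours=3 * i) for i in range(60)]
```

A production detector would also compare accounts (age, overlap of targets) to separate genuine backlash from coordination, but timing bursts are the simplest recoverable signal.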

Dynamic nature of online social platforms and fast propagation of fake news

Sharma et al. ( 2019 ) affirm that the fast proliferation of fake news through social networks makes it hard and challenging to assess the information’s credibility on social media. Similarly, Qian et al. ( 2018 ) assert that fake news and fabricated content propagate exponentially at the early stage of its creation and can cause a significant loss in a short amount of time (Friggeri et al. 2014 ) including manipulating the outcome of political events (Liu and Wu 2018 ; Bessi and Ferrara 2016 ).

Moreover, while analyzing the way source and promoters of fake news operate over the web through multiple online platforms, Zannettou et al. ( 2019 ) discovered that false information is more likely to spread across platforms (18% appearing on multiple platforms) compared to real information (11%).

Furthermore, recently, Shu et al. (2020c) attempted to understand the propagation of disinformation and fake news in social media and found that such content is produced and disseminated faster and more easily through social media because of the low barriers to doing so. Similarly, Shu et al. (2020b) studied hierarchical propagation networks for fake news detection. They performed a comparative analysis between fake and real news from structural, temporal and linguistic perspectives, demonstrating the potential of these features for fake news detection and showing their effectiveness.

Lastly, Abdullah-All-Tanvir et al. (2020) note that it is almost impossible to manually detect the sources and authenticity of fake news effectively and efficiently, due to its fast circulation within a short amount of time. It is therefore crucial to note that the dynamic nature of the various online social platforms, which results in the continued rapid and exponential propagation of such fake content, remains a major challenge that requires further investigation when defining innovative solutions for fake news detection.

Datasets issue

Existing approaches lack an inclusive dataset with derived multidimensional information about fake news characteristics, which limits the classification accuracy that machine learning models can achieve (Nyow and Chua 2019). These datasets are primarily dedicated to validating the machine learning model and are the ultimate frame of reference to train the model and analyze its performance. Therefore, if researchers evaluate their model on an unrepresentative dataset, the validity and efficiency of the model become questionable when the fake news detection approach is applied in a real-world scenario.

Moreover, several researchers (Shu et al. 2020d ; Wang et al. 2020 ; Pathak and Srihari 2019 ; Przybyla 2020 ) believe that fake news is diverse and dynamic in terms of content, topics, publishing methods and media platforms, and sophisticated linguistic styles geared to emulate true news. Consequently, training machine learning models on such sophisticated content requires large-scale annotated fake news data that are difficult to obtain (Shu et al. 2020d ).

Therefore, datasets are also an important topic to work on: improving data quality leads to better results when defining solutions. Adversarial learning techniques (e.g., GAN, SeqGAN) can provide machine-generated data to train deeper models and build robust systems that distinguish fake examples from real ones, countering the lack of datasets and the scarcity of data available to train models.
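GAN-style training is beyond a short sketch, but the shape of such an augmentation pipeline can be illustrated with a much simpler stand-in generator based on random word dropout. The perturbation, labels, and toy data are illustrative assumptions; a GAN/SeqGAN generator would produce harder synthetic examples, but the pipeline has the same shape:

```python
import random

def word_dropout(tokens, p=0.3, rng=None):
    """Create a perturbed variant of a document by randomly dropping
    words. Stands in for a learned generator in this sketch."""
    rng = rng or random.Random(0)
    kept = [t for t in tokens if rng.random() >= p]
    return kept if kept else tokens[:1]   # never return an empty document

def augment(real_docs, n_variants=2):
    """Augment a small labelled set with synthetic 'generated' examples,
    so a classifier can be trained to separate real from generated text."""
    rng = random.Random(42)
    data = [(doc, "real") for doc in real_docs]
    for doc in real_docs:
        for _ in range(n_variants):
            data.append((word_dropout(doc, rng=rng), "generated"))
    return data

data = augment([["a", "b", "c"], ["d", "e"]])
```

The resulting `(document, label)` pairs can then train any of the classifiers discussed later in this survey.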

Fake news detection literature review

Fake news detection in social networks is still in the early stage of development and there are still challenging issues that need further investigation. This has become an emerging research area that is attracting huge attention.

There are various research studies on fake news detection in online social networks; a few of them focus on the automatic detection of fake news using artificial intelligence techniques. In this section, we review the existing approaches used in automatic fake news detection, as well as the techniques that have been adopted. We then provide a critical discussion built on a primary classification scheme based on a specific set of criteria.

Categories of fake news detection

In this section, we give an overview of most of the existing automatic fake news detection solutions adopted in the literature. A recent classification by Sharma et al. ( 2019 ) uses three categories of fake news identification methods. Each category is further divided based on the type of existing methods (i.e., content-based, feedback-based and intervention-based methods). However, a review of the literature for fake news detection in online social networks shows that the existing studies can be classified into broader categories based on two major aspects that most authors inspect and make use of to define an adequate solution. These aspects can be considered as major sources of extracted information used for fake news detection and can be summarized as follows: the content-based (i.e., related to the content of the news post) and the contextual aspect (i.e., related to the context of the news post).

Consequently, the studies we reviewed can be classified into three categories based on the two aspects mentioned above (the third category being hybrid). As depicted in Fig.  5 , fake news detection solutions can be categorized as news content-based approaches, social context-based approaches (which can be divided into network-based and user-based approaches), and hybrid approaches. The latter combine both content-based and contextual approaches to define the solution.

Fig. 5 Classification of fake news detection approaches

News Content-based Category

News content-based approaches are fake news detection approaches that use content information (i.e., information extracted from the content of the news post) and that focus on studying and exploiting the news content in their proposed solutions. Content refers to the body of the news, including source, headline, text and image-video, which can reflect subtle differences.

Researchers of this category rely on content-based detection cues (i.e., text and multimedia-based cues), which are features extracted from the content of the news post. Text-based cues are features extracted from the text of the news, whereas multimedia-based cues are features extracted from the images and videos attached to the news. Figure  6 summarizes the most widely used news content representation (i.e., text and multimedia/images) and detection techniques (i.e., machine learning (ML), deep Learning (DL), natural language processing (NLP), fact-checking, crowdsourcing (CDS) and blockchain (BKC)) in news content-based category of fake news detection approaches. Most of the reviewed research works based on news content for fake news detection rely on the text-based cues (Kapusta et al. 2019 ; Kaur et al. 2020 ; Vereshchaka et al. 2020 ; Ozbay and Alatas 2020 ; Wang 2017 ; Nyow and Chua 2019 ; Hosseinimotlagh and Papalexakis 2018 ; Abdullah-All-Tanvir et al. 2019 , 2020 ; Mahabub 2020 ; Bahad et al. 2019 ; Hiriyannaiah et al. 2020 ) extracted from the text of the news content including the body of the news and its headline. However, a few researchers such as Vishwakarma et al. ( 2019 ) and Amri et al. ( 2022 ) try to recognize text from the associated image.

Fig. 6 News content-based category: news content representation and detection techniques

Most researchers of this category rely on artificial intelligence (AI) techniques (such as ML, DL and NLP models) to improve performance in terms of prediction accuracy. Others use different techniques such as fact-checking, crowdsourcing and blockchain. Specifically, the AI- and ML-based approaches in this category are trying to extract features from the news content, which they use later for content analysis and training tasks. In this particular case, the extracted features are the different types of information considered to be relevant for the analysis. Feature extraction is considered as one of the best techniques to reduce data size in automatic fake news detection. This technique aims to choose a subset of features from the original set to improve classification performance (Yazdi et al. 2020 ).
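As a concrete example of extracting text-based cues, the classic TF-IDF weighting can be sketched in a few lines. A library vectorizer would normally be used; the toy documents and the `log(N / df)` IDF variant are illustrative choices:

```python
import math
from collections import Counter

def tfidf(docs):
    """Compute TF-IDF vectors for a list of tokenised documents.
    TF is the raw term count; IDF is log(N / df), a common variant."""
    n = len(docs)
    df = Counter()                       # document frequency per term
    for doc in docs:
        df.update(set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: c * math.log(n / df[t]) for t, c in tf.items()})
    return vectors

# Invented toy headlines, tokenised
docs = [["breaking", "shocking", "cure"],
        ["election", "results", "official"],
        ["shocking", "election", "claim"]]
vecs = tfidf(docs)
```

Terms that appear in every document get weight zero, while rare, distinctive terms are weighted up, which is exactly why such vectors are useful as classifier features.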

Table  6 lists the distinct features and metadata, as well as the used datasets in the news content-based category of fake news detection approaches.

The features and datasets used in the news content-based approaches

a https://www.kaggle.com/anthonyc1/gathering-real-news-for-oct-dec-2016 , last access date: 26-12-2022

b https://mediabiasfactcheck.com/ , last access date: 26-12-2022

c https://github.com/KaiDMML/FakeNewsNet , last access date: 26-12-2022

d https://www.kaggle.com/anthonyc1/gathering-real-news-for-oct-dec-2016 , last access date: 26-12-2022

e https://www.cs.ucsb.edu/~william/data/liar_dataset.zip , last access date: 26-12-2022

f https://www.kaggle.com/mrisdal/fake-news , last access date: 26-12-2022

g https://github.com/BuzzFeedNews/2016-10-facebook-fact-check , last access date: 26-12-2022

h https://www.politifact.com/subjects/fake-news/ , last access date: 26-12-2022

i https://www.kaggle.com/rchitic17/real-or-fake , last access date: 26-12-2022

j https://www.kaggle.com/jruvika/fake-news-detection , last access date: 26-12-2022

k https://github.com/MKLab-ITI/image-verification-corpus , last access date: 26-12-2022

l https://drive.google.com/file/d/14VQ7EWPiFeGzxp3XC2DeEHi-BEisDINn/view , last access date: 26-12-2022

Social Context-based Category

Unlike news content-based solutions, the social context-based approaches capture the skeptical social context of the online news (Zhang and Ghorbani 2020) rather than focusing on the news content. The social context-based category contains fake news detection approaches that use contextual aspects (i.e., information related to the context of the news post). These aspects are based on the social context and offer additional information to help detect fake news: they are the surrounding data outside of the fake news article itself and can be an essential part of automatic fake news detection. Useful examples of contextual information include checking whether the news itself and the source that published it are credible, checking the date of the news and its supporting resources, and checking whether any other online news platforms are reporting the same or similar stories (Zhang and Ghorbani 2020).

Social context-based aspects can be classified into two subcategories, user-based and network-based, and they can be used for context analysis and training tasks in the case of AI- and ML-based approaches. User-based aspects refer to information captured from OSN users such as user profile information (Shu et al. 2019b ; Wang et al. 2019c ; Hamdi et al. 2020 ; Nyow and Chua 2019 ; Jiang et al. 2019 ) and user behavior (Cardaioli et al. 2020 ) such as user engagement (Uppada et al. 2022 ; Jiang et al. 2019 ; Shu et al. 2018b ; Nyow and Chua 2019 ) and response (Zhang et al. 2019a ; Qian et al. 2018 ). Meanwhile, network-based aspects refer to information captured from the properties of the social network where the fake content is shared and disseminated such as news propagation path (Liu and Wu 2018 ; Wu and Liu 2018 ) (e.g., propagation times and temporal characteristics of propagation), diffusion patterns (Shu et al. 2019a ) (e.g., number of retweets, shares), as well as user relationships (Mishra 2020 ; Hamdi et al. 2020 ; Jiang et al. 2019 ) (e.g., friendship status among users).
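Network-based aspects such as propagation patterns can be turned into numeric features. A minimal sketch below computes cascade size, maximum depth and maximum breadth from (spreader, resharer) pairs; the toy cascade is invented for illustration:

```python
from collections import Counter, defaultdict, deque

def propagation_features(edges, root):
    """Network-based features from a news cascade: `edges` are
    (spreader, resharer) pairs and `root` is the original poster.
    Returns cascade size, maximum depth, and maximum breadth,
    features of the kind used by propagation-based detectors."""
    children = defaultdict(list)
    for parent, child in edges:
        children[parent].append(child)
    depth_of = {root: 0}
    queue = deque([root])
    while queue:                        # BFS from the original poster
        node = queue.popleft()
        for c in children[node]:
            if c not in depth_of:
                depth_of[c] = depth_of[node] + 1
                queue.append(c)
    per_level = Counter(depth_of.values())
    return {
        "size": len(depth_of),
        "max_depth": max(depth_of.values()),
        "max_breadth": max(per_level.values()),
    }

# Toy cascade: a posts, b and c reshare a, d reshares b, e reshares d
edges = [("a", "b"), ("a", "c"), ("b", "d"), ("d", "e")]
features = propagation_features(edges, "a")
```

Studies comparing fake and real cascades report that such structural and temporal features differ between the two, which is what makes them usable as detection inputs.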

Figure  7 summarizes some of the most widely adopted social context representations, as well as the most used detection techniques (i.e., AI, ML, DL, fact-checking and blockchain), in the social context-based category of approaches.

Fig. 7 Social context-based category: social context representation and detection techniques

Table  7 lists the distinct features and metadata, the adopted detection cues, as well as the used datasets, in the context-based category of fake news detection approaches.

The features, detection cues and datasets used in the social context-based approaches

a https://www.dropbox.com/s/7ewzdrbelpmrnxu/rumdetect2017.zip , last access date: 26-12-2022

b https://snap.stanford.edu/data/ego-Twitter.html , last access date: 26-12-2022

Hybrid approaches

Most researchers focus on employing a specific method rather than a combination of both content- and context-based methods. This is because some of them (Wu and Rao 2020) believe that there are still challenging limitations in traditional fusion strategies due to existing feature correlations and semantic conflicts. For this reason, some researchers focus on extracting content-based information, while others capture social context-based information for their proposed approaches.

However, it has proven challenging to successfully automate fake news detection based on just a single type of feature (Ruchansky et al. 2017 ). Therefore, recent directions tend to do a mixture by using both news content-based and social context-based approaches for fake news detection.
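One simple way such a mixture can be realized is late fusion: score the news content and the social context separately, then combine the scores. The weighting and threshold below are illustrative assumptions, not a method taken from the reviewed works:

```python
def hybrid_score(content_score, context_score, alpha=0.6):
    """Late fusion of a content-based and a context-based fake-news
    score, both in [0, 1]; `alpha` weights the content side.
    The weighting is an illustrative assumption."""
    if not (0.0 <= content_score <= 1.0 and 0.0 <= context_score <= 1.0):
        raise ValueError("scores must be in [0, 1]")
    return alpha * content_score + (1 - alpha) * context_score

def is_fake(content_score, context_score, threshold=0.5):
    """Final decision on the fused score; threshold is illustrative."""
    return hybrid_score(content_score, context_score) >= threshold
```

More sophisticated hybrids fuse the two feature sets inside a single model rather than averaging two final scores, but late fusion is the easiest variant to reason about and a common baseline.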

Table  8 lists the distinct features and metadata, as well as the used datasets, in the hybrid category of fake news detection approaches.

The features and datasets used in the hybrid approaches

Fake news detection techniques

Another vision for classifying automatic fake news detection is to look at techniques used in the literature. Hence, we classify the detection methods based on the techniques into three groups:

  • Human-based techniques: This category mainly includes the use of crowdsourcing and fact-checking techniques, which rely on human knowledge to check and validate the veracity of news content.
  • Artificial Intelligence-based techniques: This category includes the most used AI approaches for fake news detection in the literature. Specifically, these are the approaches in which researchers use classical ML, deep learning techniques such as convolutional neural network (CNN), recurrent neural network (RNN), as well as natural language processing (NLP).
  • Blockchain-based techniques: This category includes solutions using blockchain technology to detect and mitigate fake news in social media by checking source reliability and establishing the traceability of the news content.

Human-based Techniques

One specific research direction for fake news detection consists of using human-based techniques such as crowdsourcing (Pennycook and Rand 2019 ; Micallef et al. 2020 ) and fact-checking (Vlachos and Riedel 2014 ; Chung and Kim 2021 ; Nyhan et al. 2020 ) techniques.

These approaches can be considered low-computational-requirement techniques, since both rely on human knowledge and expertise for fake news detection. However, fake news identification cannot be addressed solely through human effort, since it demands a lot of time and cost, and it is ineffective at preventing the fast spread of fake content.

Crowdsourcing. Crowdsourcing approaches (Kim et al. 2018 ) are based on the “wisdom of the crowds” (Collins et al. 2020 ) for fake content detection. These approaches rely on the collective contributions and crowd signals (Tschiatschek et al. 2018 ) of a group of people for the aggregation of crowd intelligence to detect fake news (Tchakounté et al. 2020 ) and to reduce the spread of misinformation on social media (Pennycook and Rand 2019 ; Micallef et al. 2020 ).

Micallef et al. ( 2020 ) highlight the role of the crowd in countering misinformation. They suspect that concerned citizens (i.e., the crowd), who use platforms where disinformation appears, can play a crucial role in spreading fact-checking information and in combating the spread of misinformation.

Recently, Tchakounté et al. (2020) proposed a voting system as a new method for binary aggregation of the opinions of the crowd and the knowledge of a third-party expert. The aggregator is based on majority voting on the crowd side and weighted averaging on the third-party expert side.
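In the spirit of this design, majority voting on the crowd side and weighted averaging on the expert side, a minimal aggregator can be sketched as follows. The rule for combining the two sides (requiring agreement) is an illustrative assumption, not the rule from the paper:

```python
def aggregate_credibility(crowd_votes, expert_scores, expert_weights):
    """Combine a crowd majority vote (binary votes: 1 = credible,
    0 = not credible) with a weighted average of third-party expert
    scores in [0, 1]. The item is deemed credible only when both
    sides agree; this final rule is an illustrative choice."""
    crowd_says_credible = sum(crowd_votes) * 2 > len(crowd_votes)
    weighted = sum(s * w for s, w in zip(expert_scores, expert_weights))
    expert_avg = weighted / sum(expert_weights)
    return crowd_says_credible and expert_avg >= 0.5
```

Weighting experts separately from the crowd lets a single well-calibrated fact-checker veto a mistaken majority, which is the main appeal of such two-sided aggregators.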

Similarly, Huffaker et al. ( 2020 ) propose a crowdsourced detection of emotionally manipulative language. They introduce an approach that transforms classification problems into a comparison task to mitigate conflation content by allowing the crowd to detect text that uses manipulative emotional language to sway users toward positions or actions. The proposed system leverages anchor comparison to distinguish between intrinsically emotional content and emotionally manipulative language.

La Barbera et al. ( 2020 ) try to understand how people perceive the truthfulness of information presented to them. They collect data from US-based crowd workers, build a dataset of crowdsourced truthfulness judgments for political statements, and compare it with expert annotation data generated by fact-checkers such as PolitiFact.

Coscia and Rossi ( 2020 ) introduce a crowdsourced system for flagging online news. Their bipolar model of news flagging attempts to capture the main ingredients that they observe in empirical research on fake news and disinformation.

Unlike the previously mentioned researchers who focus on news content in their approaches, Pennycook and Rand ( 2019 ) focus on using crowdsourced judgments of the quality of news sources to combat social media disinformation.

Fact-Checking. The fact-checking task is commonly manually performed by journalists to verify the truthfulness of a given claim. Indeed, fact-checking features are being adopted by multiple online social network platforms. For instance, Facebook 34 started addressing false information through independent fact-checkers in 2017, followed by Google 35 the same year. Two years later, Instagram 36 followed suit. However, the usefulness of fact-checking initiatives is questioned by journalists 37 , as well as by researchers such as Andersen and Søe ( 2020 ). On the other hand, work is being conducted to boost the effectiveness of these initiatives to reduce misinformation (Chung and Kim 2021 ; Clayton et al. 2020 ; Nyhan et al. 2020 ).

Most researchers use fact-checking websites (e.g., politifact.com, 38 snopes.com 39 and Reuters 40 ) as data sources to build their datasets and train their models. Therefore, in the following, we specifically review examples of solutions that use fact-checking (Vlachos and Riedel 2014 ) to help build datasets that can be further used in the automatic detection of fake content.

Yang et al. ( 2019a ) use the PolitiFact fact-checking website as a data source to train, tune, and evaluate their model, named XFake, on political data. The XFake system is an explainable fake news detector that assists end users in assessing news credibility. The fakeness of news items is detected and interpreted considering both content (e.g., statements) and contextual (e.g., speaker) information.

Based on the idea that fact-checkers cannot clean all data, and it must be a selection of what “matters the most” to clean while checking a claim, Sintos et al. ( 2019 ) propose a solution to help fact-checkers combat problems related to data quality (where inaccurate data lead to incorrect conclusions) and data phishing. The proposed solution is a combination of data cleaning and perturbation analysis to avoid uncertainties and errors in data and the possibility that data can be phished.

Tchechmedjiev et al. ( 2019 ) propose a system named "ClaimsKG", a knowledge graph of fact-checked claims that aims to facilitate structured queries about their truth values, authors, dates, journalistic reviews and other kinds of metadata. "ClaimsKG" models the relationships between vocabularies, which a semi-automated pipeline gathers by periodically harvesting data from popular fact-checking websites.
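Datasets harvested from fact-checking sites are often queried by matching an incoming claim against already-verified ones. A naive token-overlap matcher illustrates the idea; real systems use richer semantic similarity, and the claim store and threshold here are invented:

```python
def jaccard(a, b):
    """Token-set Jaccard similarity between two claims."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def match_claim(claim, fact_checked, min_sim=0.5):
    """Return (verdict, similarity) of the most similar already
    fact-checked claim, or None if nothing is close enough.
    `fact_checked` is a list of (claim_text, verdict) pairs."""
    best = max(fact_checked, key=lambda item: jaccard(claim, item[0]))
    sim = jaccard(claim, best[0])
    return (best[1], sim) if sim >= min_sim else None

# Invented toy store of fact-checked claims
store = [("the earth is flat", "false"),
         ("water boils at 100 celsius", "true")]
```

Claims with no close match (the `None` case) are exactly the ones that would be routed to human fact-checkers, which is how such lookup tables reduce their workload.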

AI-based Techniques

Previous work by Yaqub et al. ( 2020 ) has shown that people lack trust in automated solutions for fake news detection. However, work is already being undertaken to increase this trust, for instance by von der Weth et al. ( 2020 ).

Most researchers consider fake news detection as a classification problem and use artificial intelligence techniques, as shown in Fig.  8 . The adopted AI techniques may include machine learning (ML) (e.g., Naïve Bayes, logistic regression, support vector machines (SVM)), deep learning (DL) (e.g., convolutional neural networks (CNN), recurrent neural networks (RNN), long short-term memory (LSTM)) and natural language processing (NLP) (e.g., count vectorizer, TF-IDF vectorizer). Most of them combine several AI techniques in their solutions rather than relying on one specific approach.

Fig. 8 Examples of the most widely used AI techniques for fake news detection

Many researchers are developing machine learning models in their solutions for fake news detection. Recently, deep neural network techniques are also being employed as they are generating promising results (Islam et al. 2020 ). A neural network is a massively parallel distributed processor with simple units that can store important information and make it available for use (Hiriyannaiah et al. 2020 ). Moreover, it has been proven (Cardoso Durier da Silva et al. 2019 ) that the most widely used method for automatic detection of fake news is not simply a classical machine learning technique, but rather a fusion of classical techniques coordinated by a neural network.

Some researchers define purely machine learning models (Del Vicario et al. 2019 ; Elhadad et al. 2019 ; Aswani et al. 2017 ; Hakak et al. 2021 ; Singh et al. 2021 ) in their fake news detection approaches. The most commonly used machine learning algorithms for classification problems (Abdullah-All-Tanvir et al. 2019 ) are Naïve Bayes, logistic regression and SVM.
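As an illustration of these classical baselines, a minimal multinomial Naïve Bayes classifier with Laplace smoothing can be written from scratch; the toy training headlines are invented for the example:

```python
import math
from collections import Counter

class NaiveBayes:
    """Minimal multinomial Naive Bayes with Laplace smoothing,
    one of the classical ML baselines used for text classification."""
    def fit(self, docs, labels):
        self.classes = set(labels)
        self.word_counts = {c: Counter() for c in self.classes}
        self.class_counts = Counter(labels)
        for doc, y in zip(docs, labels):
            self.word_counts[y].update(doc)
        self.vocab = {w for c in self.classes for w in self.word_counts[c]}
        return self

    def predict(self, doc):
        def log_prob(c):
            total = sum(self.word_counts[c].values())
            # log prior + sum of smoothed log likelihoods
            lp = math.log(self.class_counts[c] / sum(self.class_counts.values()))
            for w in doc:
                lp += math.log((self.word_counts[c][w] + 1)
                               / (total + len(self.vocab)))
            return lp
        return max(self.classes, key=log_prob)

# Invented toy headlines, tokenised and labelled
train = [(["shocking", "miracle", "cure"], "fake"),
         (["you", "wont", "believe"], "fake"),
         (["official", "election", "results"], "real"),
         (["government", "report", "published"], "real")]
model = NaiveBayes().fit([d for d, _ in train], [y for _, y in train])
```

Despite its independence assumption, this baseline is hard to beat on small text datasets, which is why it keeps appearing in the reviewed approaches.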

Other researchers (Wang et al. 2019c ; Wang 2017 ; Liu and Wu 2018 ; Mishra 2020 ; Qian et al. 2018 ; Zhang et al. 2020 ; Goldani et al. 2021 ) prefer to use a mixture of different deep learning models, without combining them with classical machine learning techniques. Some even show that deep learning techniques outperform traditional machine learning techniques (Mishra et al. 2022 ). Deep learning is one of the most popular research topics in machine learning. Unlike traditional machine learning approaches, which are based on manually crafted features, deep learning approaches can learn hidden representations from simpler inputs both in context and content variations (Bondielli and Marcelloni 2019 ). Moreover, traditional machine learning algorithms almost always require structured data and are designed to "learn" from labeled data and then apply what they learned to new datasets, which requires human intervention to "teach them" when a result is incorrect (Parrish 2018 ). Deep learning networks, by contrast, rely on layers of artificial neural networks (ANN) and do not require such intervention: the multilevel layers place data in a hierarchy of different concepts, and the networks ultimately learn from their own mistakes (Parrish 2018 ). The two most widely implemented paradigms in deep neural networks are recurrent neural networks (RNN) and convolutional neural networks (CNN).

Still other researchers (Abdullah-All-Tanvir et al. 2019 ; Kaliyar et al. 2020 ; Zhang et al. 2019a ; Deepak and Chitturi 2020 ; Shu et al. 2018a ; Wang et al. 2019c ) prefer to combine traditional machine learning and deep learning classification models. Others combine machine learning and natural language processing techniques. A few combine deep learning models with natural language processing (Vereshchaka et al. 2020 ). Some other researchers (Kapusta et al. 2019 ; Ozbay and Alatas 2020 ; Ahmed et al. 2020 ) combine natural language processing with machine learning models. Furthermore, others (Abdullah-All-Tanvir et al. 2019 ; Kaur et al. 2020 ; Kaliyar 2018 ; Abdullah-All-Tanvir et al. 2020 ; Bahad et al. 2019 ) prefer to combine all the previously mentioned techniques (i.e., ML, DL and NLP) in their approaches.

Table  11 , which is relegated to the Appendix (after the bibliography) because of its size, shows a comparison of the fake news detection solutions that we have reviewed based on their main approaches, the methodology that was used and the models.

Comparison of AI-based fake news detection techniques

Blockchain-based Techniques for Source Reliability and Traceability

Another research direction for detecting and mitigating fake news in social media focuses on using blockchain solutions. Blockchain technology is recently attracting researchers’ attention due to the interesting features it offers. Immutability, decentralization, tamperproof, consensus, record keeping and non-repudiation of transactions are some of the key features that make blockchain technology exploitable, not just for cryptocurrencies, but also to prove the authenticity and integrity of digital assets.

However, the proposed blockchain approaches are few in number, and they remain fundamental and theoretical: the solutions currently available are still at the research, prototype, and beta-testing stages (DiCicco and Agarwal 2020 ; Tchechmedjiev et al. 2019 ). Furthermore, most researchers (Ochoa et al. 2019 ; Song et al. 2019 ; Shang et al. 2018 ; Qayyum et al. 2019 ; Jing and Murugesan 2018 ; Buccafurri et al. 2017 ; Chen et al. 2018 ) do not specify which type of fake news they are mitigating in their studies; they mention news content in general, which is not precise enough to ground innovative solutions. Serious implementations are therefore needed to prove the usefulness and feasibility of this newly developing research vision.
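The integrity and traceability features that blockchain brings can be illustrated with a toy append-only hash chain. This sketch shows only the tamper-evidence idea; a real blockchain adds consensus, decentralization, and non-repudiation, none of which are modelled here:

```python
import hashlib
import json

def _hash(record):
    """Deterministic SHA-256 digest of a record's canonical JSON form."""
    return hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()

class NewsChain:
    """Toy append-only hash chain: each entry stores the hash of the
    previous one, so tampering with any archived news item breaks
    verification for the rest of the chain."""
    def __init__(self):
        self.entries = []

    def append(self, source, content):
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {"source": source, "content": content, "prev": prev}
        record["hash"] = _hash(
            {k: record[k] for k in ("source", "content", "prev")})
        self.entries.append(record)

    def verify(self):
        prev = "0" * 64
        for e in self.entries:
            body = {"source": e["source"], "content": e["content"],
                    "prev": e["prev"]}
            if e["prev"] != prev or e["hash"] != _hash(body):
                return False
            prev = e["hash"]
        return True
```

Archiving news items this way supports traceability (who published what, in what order) and authenticity checks, the two advantages most often claimed for the blockchain-based solutions reviewed here.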

Table  9 shows a classification of the reviewed blockchain-based approaches. In the classification, we listed the following:

  • The type of fake news that authors are trying to mitigate, which can be multimedia-based or text-based fake news.
  • The techniques used for fake news mitigation, which can be either blockchain only, or blockchain combined with other techniques such as AI, Data mining, Truth-discovery, Preservation metadata, Semantic similarity, Crowdsourcing, Graph theory and SIR model (Susceptible, Infected, Recovered).
  • The feature that is offered as an advantage of the given solution (e.g., Reliability, Authenticity and Traceability). Reliability is the credibility and truthfulness of the news content, which consists of proving the trustworthiness of the content. Traceability aims to trace and archive the contents. Authenticity consists of checking whether the content is real and authentic.

A checkmark ( ✓ ) in Table  9 denotes that the criterion is explicitly mentioned in the proposed solution. An empty dash (–) cell denotes either that the criterion was not explicitly mentioned in the work (e.g., fake news type) or that the classification does not apply (e.g., techniques/other).

A classification of popular blockchain-based approaches for fake news detection in social media

After reviewing the most relevant state of the art for automatic fake news detection, we classify it as shown in Table  10 based on the detection aspects (i.e., content-based, contextual, or hybrid aspects) and the techniques used (i.e., AI, crowdsourcing, fact-checking, blockchain or hybrid techniques). Hybrid techniques refer to solutions that simultaneously combine different techniques from the previously mentioned categories (i.e., inter-hybrid methods), as well as techniques within the same class of methods (i.e., intra-hybrid methods), in order to define innovative solutions for fake news detection. A hybrid method should bring the best of both worlds. We then provide a discussion along different axes.

Fake news detection approaches classification

News content-based methods

Most of the news content-based approaches consider fake news detection as a classification problem and use AI techniques such as classical machine learning (e.g., regression, Bayesian classifiers) as well as deep learning (i.e., neural methods such as CNNs and RNNs). More specifically, the classification of social media content is a fundamental task for social media mining, so most existing methods regard it as a text categorization problem and mainly focus on content features, such as words and hashtags (Wu and Liu 2018 ). The main challenges facing these approaches are how to extract features so as to reduce the amount of data needed to train the models, and which features are the most suitable for accurate results.
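The text-categorization view described above can be sketched with a toy bag-of-words Bayesian classifier. This is a minimal, self-contained illustration on invented headlines, not a surveyed system; real approaches use far richer features (n-grams, embeddings, hashtags) and large labeled corpora.

```python
import math
from collections import Counter, defaultdict

# Toy bag-of-words Naive Bayes classifier: word frequencies per class
# are the only content features, with Laplace smoothing for unseen words.
class NaiveBayesText:
    def fit(self, texts, labels):
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.label_counts = Counter(labels)
        for text, label in zip(texts, labels):
            self.word_counts[label].update(text.lower().split())
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, text):
        best, best_lp = None, float("-inf")
        total_docs = sum(self.label_counts.values())
        for label, n in self.label_counts.items():
            lp = math.log(n / total_docs)  # class prior
            total = sum(self.word_counts[label].values())
            for w in text.lower().split():
                # Laplace smoothing avoids zero probabilities
                lp += math.log((self.word_counts[label][w] + 1) /
                               (total + len(self.vocab)))
            if lp > best_lp:
                best, best_lp = label, lp
        return best

# Invented toy headlines for illustration only.
clf = NaiveBayesText().fit(
    ["shocking miracle cure revealed", "you won't believe this secret",
     "official report released today", "study published in journal"],
    ["fake", "fake", "real", "real"])
```

The sketch also makes the stated challenge concrete: with content features alone, a fake headline written in a neutral register would be indistinguishable from a real one.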

Researchers using such approaches are motivated by the fact that the news content is the main entity in the deception process, and it is a straightforward factor to analyze when looking for predictive clues of deception. However, detecting fake news from the content alone is not enough, because fake news is created in a strategic, intentional way to mimic the truth (i.e., the content can be deliberately manipulated by the spreader to make it look like real news). Therefore, it is considered challenging, if not impossible, to identify useful features (Wu and Liu 2018 ) and consequently tell the nature of such news solely from the content.

Moreover, works that utilize only the news content for fake news detection ignore the rich information and latent user intelligence (Qian et al. 2018 ) stored in user responses toward previously disseminated articles. Therefore, the auxiliary information is deemed crucial for an effective fake news detection approach.

Social context-based methods

The context-based approaches explore the surrounding data outside of the news content, which can be an effective direction with advantages in areas where content approaches based on text classification run into issues. However, most existing studies implementing contextual methods mainly focus on additional information coming from users and network diffusion patterns. Moreover, from a technical perspective, they are limited to the use of sophisticated machine learning techniques for feature extraction, and they ignore the usefulness of results coming from techniques such as web search and crowdsourcing, which can save considerable time and help in the early detection and identification of fake content.
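The user-side contextual signals mentioned above can be sketched as a simple feature-extraction step: summarizing each user's historical behavior toward previously fact-checked items. The record fields ("user", "verdict") are illustrative assumptions, not a schema from the surveyed works.

```python
# Sketch of contextual (user-side) feature extraction: for each user,
# summarize historical behavior toward previously fact-checked articles.
# A high fraction of past "fake" shares is a weak credibility signal.
def user_credibility_features(shares):
    """shares: list of dicts like {"user": "u1", "verdict": "fake" | "real"}.
    Returns per-user features: (total_shares, fake_share_ratio)."""
    stats = {}
    for s in shares:
        total, fake = stats.get(s["user"], (0, 0))
        stats[s["user"]] = (total + 1, fake + (s["verdict"] == "fake"))
    return {u: (t, f / t) for u, (t, f) in stats.items()}

# Toy share history for two hypothetical users.
feats = user_credibility_features([
    {"user": "u1", "verdict": "fake"},
    {"user": "u1", "verdict": "fake"},
    {"user": "u1", "verdict": "real"},
    {"user": "u2", "verdict": "real"},
])
```

Features of this kind are what context-based detectors feed into their models alongside network diffusion patterns.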

Hybrid approaches can simultaneously model different aspects of fake news such as the content-based aspects, as well as the contextual aspect based on both the OSN user and the OSN network patterns. However, these approaches are deemed more complex in terms of models (Bondielli and Marcelloni 2019 ), data availability, and the number of features. Furthermore, it remains difficult to decide which information among each category (i.e., content-based and context-based information) is most suitable and appropriate to be used to achieve accurate and precise results. Therefore, there are still very few studies belonging to this category of hybrid approaches.
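At its simplest, the hybrid idea amounts to concatenating content-based and context-based feature vectors into a single input for one downstream classifier. The weighting scheme below is an illustrative assumption, included only to show where the "which information is most suitable" question enters.

```python
# Sketch of the hybrid aspect: merge a content feature vector (e.g., from
# text analysis) with a context feature vector (e.g., user credibility,
# propagation statistics) into one classifier input. Optional per-source
# weights reflect how much each aspect is trusted.
def hybrid_features(content_vec, context_vec, weights=(1.0, 1.0)):
    wc, wx = weights
    return [wc * v for v in content_vec] + [wx * v for v in context_vec]

# Toy vectors: two content scores and two context scores, with the
# context half down-weighted.
vec = hybrid_features([0.8, 0.1], [4.0, 2.0], weights=(1.0, 0.5))
```

Choosing those weights, and which features to include at all, is exactly the open question the surveyed hybrid approaches face.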

Early detection

As fake news usually evolves and spreads very fast on social media, it is critical and urgent to consider early detection. Yet this is a challenging task, especially on highly dynamic platforms such as social networks. Both news content-based and social context-based approaches suffer from this challenge.

Approaches that detect fake news through content analysis face this issue to a lesser degree, but they are still limited by the lack of information required for verification when the news is in its early stage of spread. Approaches based on contextual analysis are the most likely to suffer from the lack of early detection, since most of them rely on information that typically becomes available only after the fake content has spread, such as social engagement, user responses, and propagation patterns. Therefore, it is crucial to consider both trusted human verification and historical data in an attempt to detect fake content during its early stage of propagation.
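The closing suggestion, falling back on historical data when engagement signals do not yet exist, can be sketched as a simple score-blending rule. The blending weight, score ranges, and function name are illustrative assumptions, not a method from the surveyed works.

```python
# Early-stage sketch: before propagation data exists, blend a content-model
# score with the source's historical record as a prior.
def early_fake_score(content_score, source_history, alpha=0.6):
    """content_score: model-estimated probability the text is fake (0..1).
    source_history: fraction of the source's past items judged fake (0..1),
    or None when the source has no verified history."""
    if source_history is None:
        return content_score  # nothing else is available yet
    return alpha * content_score + (1 - alpha) * source_history

# A suspicious text from a source with a mixed track record.
score = early_fake_score(0.9, 0.5, alpha=0.5)
```

As engagement and propagation data accumulate, such a prior-based score would be superseded by the richer contextual signals discussed above.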

Conclusion and future directions

In this paper, we introduced the general context of the fake news problem as one of the major issues of the online deception problem in online social networks. Based on a review of the most relevant state of the art, we summarized and classified existing definitions of fake news and its related terms. We also listed various typologies and existing categorizations of fake news, such as intent-based fake news (including clickbait, hoaxes, rumors, satire, propaganda, conspiracy theories, and framing) and content-based fake news (including text-based and multimedia-based fake news, the latter covering deepfake videos and GAN-generated fake images). We discussed the major challenges related to fake news detection and mitigation in social media, including the deceptive nature of fabricated content, the lack of human awareness in the field of fake news, the issue of non-human spreaders (e.g., social bots), the dynamicity of such online platforms, which results in fast propagation of fake content, and the quality of existing datasets, which still limits the efficiency of the proposed solutions. We reviewed existing researchers’ visions regarding the automatic detection of fake news based on the adopted approaches (i.e., news content-based, social context-based, or hybrid approaches) and the techniques used (i.e., artificial intelligence-based methods; crowdsourcing, fact-checking, and blockchain-based methods; and hybrid methods), then presented a comparative study of the reviewed works. We also provided a critical discussion of the reviewed approaches along different axes, such as the adopted aspect for fake news detection (i.e., content-based, contextual, and hybrid aspects) and the early detection perspective.

To conclude, we present the main issues in combating the fake news problem that need to be further investigated when proposing new detection approaches. We believe that to define an efficient fake news detection approach, we need to consider the following:

  • Our choice of sources of information and search criteria may have introduced biases in our research. If so, it would be desirable to identify those biases and mitigate them.
  • News content is the fundamental source to find clues to distinguish fake from real content. However, contextual information derived from social media users and from the network can provide useful auxiliary information to increase detection accuracy. Specifically, capturing users’ characteristics and users’ behavior toward shared content can be a key task for fake news detection.
  • Moreover, capturing users’ historical behavior, including their emotions and/or opinions toward news content, can help in the early detection and mitigation of fake news.
  • Furthermore, adversarial learning techniques (e.g., GAN, SeqGAN) can be considered a promising direction for mitigating the lack and scarcity of available datasets by providing machine-generated data that can be used to train and build robust systems to distinguish fake examples from real ones.
  • Lastly, analyzing how sources and promoters of fake news operate over the web through multiple online platforms is crucial; Zannettou et al. ( 2019 ) discovered that false information is more likely to spread across platforms (18% appearing on multiple platforms) compared to valid information (11%).

Appendix: A Comparison of AI-based fake news detection techniques

This Appendix consists only of the rather long Table  11 . It shows a comparison of the fake news detection solutions based on artificial intelligence that we have reviewed, according to their main approaches, the methodology used, and the models, as explained in Sect.  6.2.2 .

Author Contributions

The order of authors is alphabetic as is customary in the third author’s field. The lead author was Sabrine Amri, who collected and analyzed the data and wrote a first draft of the paper, all along under the supervision and tight guidance of Esma Aïmeur. Gilles Brassard reviewed, criticized and polished the work into its final form.

This work is supported in part by Canada’s Natural Sciences and Engineering Research Council.

Availability of data and material

Declarations

On behalf of all authors, the corresponding author states that there is no conflict of interest.

1 https://www.nationalacademies.org/news/2021/07/as-surgeon-general-urges-whole-of-society-effort-to-fight-health-misinformation-the-work-of-the-national-academies-helps-foster-an-evidence-based-information-environment , last access date: 26-12-2022.

2 https://time.com/4897819/elvis-presley-alive-conspiracy-theories/ , last access date: 26-12-2022.

3 https://www.therichest.com/shocking/the-evidence-15-reasons-people-think-the-earth-is-flat/ , last access date: 26-12-2022.

4 https://www.grunge.com/657584/the-truth-about-1952s-alien-invasion-of-washington-dc/ , last access date: 26-12-2022.

5 https://www.journalism.org/2021/01/12/news-use-across-social-media-platforms-in-2020/ , last access date: 26-12-2022.

6 https://www.pewresearch.org/fact-tank/2018/12/10/social-media-outpaces-print-newspapers-in-the-u-s-as-a-news-source/ , last access date: 26-12-2022.

7 https://www.buzzfeednews.com/article/janelytvynenko/coronavirus-fake-news-disinformation-rumors-hoaxes , last access date: 26-12-2022.

8 https://www.factcheck.org/2020/03/viral-social-media-posts-offer-false-coronavirus-tips/ , last access date: 26-12-2022.

9 https://www.factcheck.org/2020/02/fake-coronavirus-cures-part-2-garlic-isnt-a-cure/ , last access date: 26-12-2022.

10 https://www.bbc.com/news/uk-36528256 , last access date: 26-12-2022.

11 https://en.wikipedia.org/wiki/Pizzagate_conspiracy_theory , last access date: 26-12-2022.

12 https://www.theguardian.com/world/2017/jan/09/germany-investigating-spread-fake-news-online-russia-election , last access date: 26-12-2022.

13 https://www.macquariedictionary.com.au/resources/view/word/of/the/year/2016 , last access date: 26-12-2022.

14 https://www.macquariedictionary.com.au/resources/view/word/of/the/year/2018 , last access date: 26-12-2022.

15 https://apnews.com/article/47466c5e260149b1a23641b9e319fda6 , last access date: 26-12-2022.

16 https://blog.collinsdictionary.com/language-lovers/collins-2017-word-of-the-year-shortlist/ , last access date: 26-12-2022.

17 https://www.gartner.com/smarterwithgartner/gartner-top-strategic-predictions-for-2018-and-beyond/ , last access date: 26-12-2022.

18 https://www.technologyreview.com/s/612236/even-the-best-ai-for-spotting-fake-news-is-still-terrible/ , last access date: 26-12-2022.

19 https://scholar.google.ca/ , last access date: 26-12-2022.

20 https://ieeexplore.ieee.org/ , last access date: 26-12-2022.

21 https://link.springer.com/ , last access date: 26-12-2022.

22 https://www.sciencedirect.com/ , last access date: 26-12-2022.

23 https://www.scopus.com/ , last access date: 26-12-2022.

24 https://www.acm.org/digital-library , last access date: 26-12-2022.

25 https://www.politico.com/magazine/story/2016/12/fake-news-history-long-violent-214535 , last access date: 26-12-2022.

26 https://en.wikipedia.org/wiki/Trial_of_Socrates , last access date: 26-12-2022.

27 https://trends.google.com/trends/explore?hl=en-US&tz=-180&date=2013-12-06+2018-01-06&geo=US&q=fake+news&sni=3 , last access date: 26-12-2022.

28 https://ec.europa.eu/digital-single-market/en/tackling-online-disinformation , last access date: 26-12-2022.

29 https://www.nato.int/cps/en/natohq/177273.htm , last access date: 26-12-2022.

30 https://www.collinsdictionary.com/dictionary/english/fake-news , last access date: 26-12-2022.

31 https://www.statista.com/statistics/657111/fake-news-sharing-online/ , last access date: 26-12-2022.

32 https://www.statista.com/statistics/657090/fake-news-recogition-confidence/ , last access date: 26-12-2022.

33 https://www.nbcnews.com/tech/social-media/now-available-more-200-000-deleted-russian-troll-tweets-n844731 , last access date: 26-12-2022.

34 https://www.theguardian.com/technology/2017/mar/22/facebook-fact-checking-tool-fake-news , last access date: 26-12-2022.

35 https://www.theguardian.com/technology/2017/apr/07/google-to-display-fact-checking-labels-to-show-if-news-is-true-or-false , last access date: 26-12-2022.

36 https://about.instagram.com/blog/announcements/combatting-misinformation-on-instagram , last access date: 26-12-2022.

37 https://www.wired.com/story/instagram-fact-checks-who-will-do-checking/ , last access date: 26-12-2022.

38 https://www.politifact.com/ , last access date: 26-12-2022.

39 https://www.snopes.com/ , last access date: 26-12-2022.

40 https://www.reutersagency.com/en/ , last access date: 26-12-2022.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Esma Aïmeur, Email: aimeur@iro.umontreal.ca .

Sabrine Amri, Email: [email protected] .

Gilles Brassard, Email: brassard@iro.umontreal.ca .

  • Abdullah-All-Tanvir, Mahir EM, Akhter S, Huq MR (2019) Detecting fake news using machine learning and deep learning algorithms. In: 7th international conference on smart computing and communications (ICSCC), IEEE, pp 1–5. 10.1109/ICSCC.2019.8843612
  • Abdullah-All-Tanvir, Mahir EM, Huda SMA, Barua S (2020) A hybrid approach for identifying authentic news using deep learning methods on popular Twitter threads. In: International conference on artificial intelligence and signal processing (AISP), IEEE, pp 1–6. 10.1109/AISP48273.2020.9073583
  • Abu Arqoub O, Abdulateef Elega A, Efe Özad B, Dwikat H, Adedamola Oloyede F. Mapping the scholarship of fake news research: a systematic review. J Pract. 2022;16(1):56–86. doi: 10.1080/17512786.2020.1805791
  • Ahmed S, Hinkelmann K, Corradini F. Development of fake news model using machine learning through natural language processing. Int J Comput Inf Eng. 2020;14(12):454–460.
  • Aïmeur E, Brassard G, Rioux J. Data privacy: an end-user perspective. Int J Comput Netw Commun Secur. 2013;1(6):237–250.
  • Aïmeur E, Hage H, Amri S (2018) The scourge of online deception in social networks. In: 2018 international conference on computational science and computational intelligence (CSCI), IEEE, pp 1266–1271. 10.1109/CSCI46756.2018.00244
  • Alemanno A. How to counter fake news? A taxonomy of anti-fake news approaches. Eur J Risk Regul. 2018;9(1):1–5. doi: 10.1017/err.2018.12
  • Allcott H, Gentzkow M. Social media and fake news in the 2016 election. J Econ Perspect. 2017;31(2):211–36. doi: 10.1257/jep.31.2.211
  • Allen J, Howland B, Mobius M, Rothschild D, Watts DJ. Evaluating the fake news problem at the scale of the information ecosystem. Sci Adv. 2020. doi: 10.1126/sciadv.aay3539
  • Allington D, Duffy B, Wessely S, Dhavan N, Rubin J. Health-protective behaviour, social media usage and conspiracy belief during the Covid-19 public health emergency. Psychol Med. 2020. doi: 10.1017/S003329172000224X
  • Alonso-Galbán P, Alemañy-Castilla C (2022) Curbing misinformation and disinformation in the Covid-19 era: a view from Cuba. MEDICC Rev 22:45–46. 10.37757/MR2020.V22.N2.12
  • Altay S, Hacquin AS, Mercier H. Why do so few people share fake news? It hurts their reputation. New Media Soc. 2022;24(6):1303–1324. doi: 10.1177/1461444820969893
  • Amri S, Sallami D, Aïmeur E (2022) Exmulf: an explainable multimodal content-based fake news detection system. In: International symposium on foundations and practice of security. Springer, Berlin, pp 177–187. 10.1109/IJCNN48605.2020.9206973
  • Andersen J, Søe SO. Communicative actions we live by: the problem with fact-checking, tagging or flagging fake news-the case of Facebook. Eur J Commun. 2020;35(2):126–139. doi: 10.1177/0267323119894489
  • Apuke OD, Omar B. Fake news and Covid-19: modelling the predictors of fake news sharing among social media users. Telematics Inform. 2021;56:101475. doi: 10.1016/j.tele.2020.101475
  • Apuke OD, Omar B, Tunca EA, Gever CV. The effect of visual multimedia instructions against fake news spread: a quasi-experimental study with Nigerian students. J Librariansh Inf Sci. 2022. doi: 10.1177/09610006221096477
  • Aswani R, Ghrera S, Kar AK, Chandra S. Identifying buzz in social media: a hybrid approach using artificial bee colony and k-nearest neighbors for outlier detection. Soc Netw Anal Min. 2017;7(1):1–10. doi: 10.1007/s13278-017-0461-2
  • Avram M, Micallef N, Patil S, Menczer F (2020) Exposure to social engagement metrics increases vulnerability to misinformation. arXiv preprint arXiv:2005.04682. 10.37016/mr-2020-033
  • Badawy A, Lerman K, Ferrara E (2019) Who falls for online political manipulation? In: Companion proceedings of the 2019 world wide web conference, pp 162–168. 10.1145/3308560.3316494
  • Bahad P, Saxena P, Kamal R. Fake news detection using bi-directional LSTM-recurrent neural network. Procedia Comput Sci. 2019;165:74–82. doi: 10.1016/j.procs.2020.01.072
  • Bakdash J, Sample C, Rankin M, Kantarcioglu M, Holmes J, Kase S, Zaroukian E, Szymanski B (2018) The future of deception: machine-generated and manipulated images, video, and audio? In: 2018 international workshop on social sensing (SocialSens), IEEE, pp 2–2. 10.1109/SocialSens.2018.00009
  • Balmas M. When fake news becomes real: combined exposure to multiple news sources and political attitudes of inefficacy, alienation, and cynicism. Commun Res. 2014;41(3):430–454. doi: 10.1177/0093650212453600
  • Baptista JP, Gradim A. Understanding fake news consumption: a review. Soc Sci. 2020. doi: 10.3390/socsci9100185
  • Baptista JP, Gradim A. A working definition of fake news. Encyclopedia. 2022;2(1):632–645. doi: 10.3390/encyclopedia2010043
  • Bastick Z. Would you notice if fake news changed your behavior? An experiment on the unconscious effects of disinformation. Comput Hum Behav. 2021;116:106633. doi: 10.1016/j.chb.2020.106633
  • Batailler C, Brannon SM, Teas PE, Gawronski B. A signal detection approach to understanding the identification of fake news. Perspect Psychol Sci. 2022;17(1):78–98. doi: 10.1177/1745691620986135
  • Bessi A, Ferrara E (2016) Social bots distort the 2016 US presidential election online discussion. First Monday 21(11-7). 10.5210/fm.v21i11.7090
  • Bhattacharjee A, Shu K, Gao M, Liu H (2020) Disinformation in the online information ecosystem: detection, mitigation and challenges. arXiv preprint arXiv:2010.09113
  • Bhuiyan MM, Zhang AX, Sehat CM, Mitra T. Investigating differences in crowdsourced news credibility assessment: raters, tasks, and expert criteria. Proc ACM Hum Comput Interact. 2020;4(CSCW2):1–26. doi: 10.1145/3415164
  • Bode L, Vraga EK. In related news, that was wrong: the correction of misinformation through related stories functionality in social media. J Commun. 2015;65(4):619–638. doi: 10.1111/jcom.12166
  • Bondielli A, Marcelloni F. A survey on fake news and rumour detection techniques. Inf Sci. 2019;497:38–55. doi: 10.1016/j.ins.2019.05.035
  • Bovet A, Makse HA. Influence of fake news in Twitter during the 2016 US presidential election. Nat Commun. 2019;10(1):1–14. doi: 10.1038/s41467-018-07761-2
  • Brashier NM, Pennycook G, Berinsky AJ, Rand DG. Timing matters when correcting fake news. Proc Natl Acad Sci. 2021. doi: 10.1073/pnas.2020043118
  • Brewer PR, Young DG, Morreale M. The impact of real news about “fake news”: intertextual processes and political satire. Int J Public Opin Res. 2013;25(3):323–343. doi: 10.1093/ijpor/edt015
  • Bringula RP, Catacutan-Bangit AE, Garcia MB, Gonzales JPS, Valderama AMC. “Who is gullible to political disinformation?” Predicting susceptibility of university students to fake news. J Inf Technol Polit. 2022;19(2):165–179. doi: 10.1080/19331681.2021.1945988
  • Buccafurri F, Lax G, Nicolazzo S, Nocera A (2017) Tweetchain: an alternative to blockchain for crowd-based applications. In: International conference on web engineering, Springer, Berlin, pp 386–393. 10.1007/978-3-319-60131-1_24
  • Burshtein S. The true story on fake news. Intell Prop J. 2017;29(3):397–446.
  • Cardaioli M, Cecconello S, Conti M, Pajola L, Turrin F (2020) Fake news spreaders profiling through behavioural analysis. In: CLEF (working notes)
  • Cardoso Durier da Silva F, Vieira R, Garcia AC (2019) Can machines learn to detect fake news? A survey focused on social media. In: Proceedings of the 52nd Hawaii international conference on system sciences. 10.24251/HICSS.2019.332
  • Carmi E, Yates SJ, Lockley E, Pawluczuk A (2020) Data citizenship: rethinking data literacy in the age of disinformation, misinformation, and malinformation. Intern Policy Rev 9(2):1–22. 10.14763/2020.2.1481
  • Celliers M, Hattingh M (2020) A systematic review on fake news themes reported in literature. In: Conference on e-Business, e-Services and e-Society. Springer, Berlin, pp 223–234. 10.1007/978-3-030-45002-1_19
  • Chen Y, Li Q, Wang H (2018) Towards trusted social networks with blockchain technology. arXiv preprint arXiv:1801.02796
  • Cheng L, Guo R, Shu K, Liu H (2020) Towards causal understanding of fake news dissemination. arXiv preprint arXiv:2010.10580
  • Chiu MM, Oh YW. How fake news differs from personal lies. Am Behav Sci. 2021;65(2):243–258. doi: 10.1177/0002764220910243
  • Chung M, Kim N. When I learn the news is false: how fact-checking information stems the spread of fake news via third-person perception. Hum Commun Res. 2021;47(1):1–24. doi: 10.1093/hcr/hqaa010
  • Clarke J, Chen H, Du D, Hu YJ. Fake news, investor attention, and market reaction. Inf Syst Res. 2020. doi: 10.1287/isre.2019.0910
  • Clayton K, Blair S, Busam JA, Forstner S, Glance J, Green G, Kawata A, Kovvuri A, Martin J, Morgan E, et al. Real solutions for fake news? Measuring the effectiveness of general warnings and fact-check tags in reducing belief in false stories on social media. Polit Behav. 2020;42(4):1073–1095. doi: 10.1007/s11109-019-09533-0
  • Collins B, Hoang DT, Nguyen NT, Hwang D (2020) Fake news types and detection models on social media: a state-of-the-art survey. In: Asian conference on intelligent information and database systems. Springer, Berlin, pp 562–573. 10.1007/978-981-15-3380-8_49
  • Conroy NK, Rubin VL, Chen Y. Automatic deception detection: methods for finding fake news. Proc Assoc Inf Sci Technol. 2015;52(1):1–4. doi: 10.1002/pra2.2015.145052010082
  • Cooke NA. Posttruth, truthiness, and alternative facts: information behavior and critical information consumption for a new age. Libr Q. 2017;87(3):211–221. doi: 10.1086/692298
  • Coscia M, Rossi L. Distortions of political bias in crowdsourced misinformation flagging. J R Soc Interface. 2020;17(167):20200020. doi: 10.1098/rsif.2020.0020
  • Dame Adjin-Tettey T. Combating fake news, disinformation, and misinformation: experimental evidence for media literacy education. Cogent Arts Human. 2022;9(1):2037229. doi: 10.1080/23311983.2022.2037229
  • Deepak S, Chitturi B. Deep neural approach to fake-news identification. Procedia Comput Sci. 2020;167:2236–2243. doi: 10.1016/j.procs.2020.03.276
  • de Cock Buning M (2018) A multi-dimensional approach to disinformation: report of the independent high level group on fake news and online disinformation. Publications Office of the European Union
  • Del Vicario M, Quattrociocchi W, Scala A, Zollo F. Polarization and fake news: early warning of potential misinformation targets. ACM Trans Web (TWEB). 2019;13(2):1–22. doi: 10.1145/3316809
  • Demuyakor J, Opata EM. Fake news on social media: predicting which media format influences fake news most on Facebook. J Intell Commun. 2022. doi: 10.54963/jic.v2i1.56
  • Derakhshan H, Wardle C (2017) Information disorder: definitions. In: Understanding and addressing the disinformation ecosystem, pp 5–12
  • Desai AN, Ruidera D, Steinbrink JM, Granwehr B, Lee DH. Misinformation and disinformation: the potential disadvantages of social media in infectious disease and how to combat them. Clin Infect Dis. 2022;74(Supplement 3):e34–e39. doi: 10.1093/cid/ciac109
  • Di Domenico G, Sit J, Ishizaka A, Nunan D. Fake news, social media and marketing: a systematic review. J Bus Res. 2021;124:329–341. doi: 10.1016/j.jbusres.2020.11.037
  • Dias N, Pennycook G, Rand DG. Emphasizing publishers does not effectively reduce susceptibility to misinformation on social media. Harv Kennedy School Misinform Rev. 2020. doi: 10.37016/mr-2020-001
  • DiCicco KW, Agarwal N (2020) Blockchain technology-based solutions to fight misinformation: a survey. In: Disinformation, misinformation, and fake news in social media. Springer, Berlin, pp 267–281. 10.1007/978-3-030-42699-6_14
  • Douglas KM, Uscinski JE, Sutton RM, Cichocka A, Nefes T, Ang CS, Deravi F. Understanding conspiracy theories. Polit Psychol. 2019;40:3–35. doi: 10.1111/pops.12568
  • Edgerly S, Mourão RR, Thorson E, Tham SM. When do audiences verify? How perceptions about message and source influence audience verification of news headlines. J Mass Commun Q. 2020;97(1):52–71. doi: 10.1177/1077699019864680
  • Egelhofer JL, Lecheler S. Fake news as a two-dimensional phenomenon: a framework and research agenda. Ann Int Commun Assoc. 2019;43(2):97–116. doi: 10.1080/23808985.2019.1602782
  • Elhadad MK, Li KF, Gebali F (2019) A novel approach for selecting hybrid features from online news textual metadata for fake news detection. In: International conference on p2p, parallel, grid, cloud and internet computing. Springer, Berlin, pp 914–925. 10.1007/978-3-030-33509-0_86
  • ERGA (2018) Fake news, and the information disorder. European Broadcasting Union (EBU)
  • ERGA (2021) Notions of disinformation and related concepts. European Regulators Group for Audiovisual Media Services (ERGA)
  • Escolà-Gascón Á. New techniques to measure lie detection using Covid-19 fake news and the Multivariable Multiaxial Suggestibility Inventory-2 (MMSI-2). Comput Hum Behav Rep. 2021;3:100049. doi: 10.1016/j.chbr.2020.100049
  • Fazio L. Pausing to consider why a headline is true or false can help reduce the sharing of false news. Harv Kennedy School Misinform Rev. 2020. doi: 10.37016/mr-2020-009
  • Ferrara E, Varol O, Davis C, Menczer F, Flammini A. The rise of social bots. Commun ACM. 2016;59(7):96–104. doi: 10.1145/2818717
  • Flynn D, Nyhan B, Reifler J. The nature and origins of misperceptions: understanding false and unsupported beliefs about politics. Polit Psychol. 2017;38:127–150. doi: 10.1111/pops.12394
  • Fraga-Lamas P, Fernández-Caramés TM. Fake news, disinformation, and deepfakes: leveraging distributed ledger technologies and blockchain to combat digital deception and counterfeit reality. IT Prof. 2020;22(2):53–59. doi: 10.1109/MITP.2020.2977589
  • Freeman D, Waite F, Rosebrock L, Petit A, Causier C, East A, Jenner L, Teale AL, Carr L, Mulhall S, et al. Coronavirus conspiracy beliefs, mistrust, and compliance with government guidelines in England. Psychol Med. 2020. doi: 10.1017/S0033291720001890
  • Friggeri A, Adamic L, Eckles D, Cheng J (2014) Rumor cascades. In: Proceedings of the international AAAI conference on web and social media
  • García SA, García GG, Prieto MS, Moreno Guerrero AJ, Rodríguez Jiménez C. The impact of term fake news on the scientific community. Scientific performance and mapping in Web of Science. Soc Sci. 2020. doi: 10.3390/socsci9050073
  • Garrett RK, Bond RM. Conservatives’ susceptibility to political misperceptions. Sci Adv. 2021. doi: 10.1126/sciadv.abf1234
  • Giachanou A, Ríssola EA, Ghanem B, Crestani F, Rosso P (2020) The role of personality and linguistic patterns in discriminating between fake news spreaders and fact checkers. In: International conference on applications of natural language to information systems. Springer, Berlin, pp 181–192. 10.1007/978-3-030-51310-8_17
  • Golbeck J, Mauriello M, Auxier B, Bhanushali KH, Bonk C, Bouzaghrane MA, Buntain C, Chanduka R, Cheakalos P, Everett JB, et al (2018) Fake news vs satire: a dataset and analysis. In: Proceedings of the 10th ACM conference on web science, pp 17–21. 10.1145/3201064.3201100
  • Goldani MH, Momtazi S, Safabakhsh R. Detecting fake news with capsule neural networks. Appl Soft Comput. 2021;101:106991. doi: 10.1016/j.asoc.2020.106991
  • Goldstein I, Yang L. Good disclosure, bad disclosure. J Financ Econ. 2019;131(1):118–138. doi: 10.1016/j.jfineco.2018.08.004
  • Grinberg N, Joseph K, Friedland L, Swire-Thompson B, Lazer D. Fake news on Twitter during the 2016 US presidential election. Science. 2019;363(6425):374–378. doi: 10.1126/science.aau2706
  • Guadagno RE, Guttieri K (2021) Fake news and information warfare: an examination of the political and psychological processes from the digital sphere to the real world. In: Research anthology on fake news, political warfare, and combatting the spread of misinformation. IGI Global, pp 218–242. 10.4018/978-1-7998-7291-7.ch013
  • Guess A, Nagler J, Tucker J. Less than you think: prevalence and predictors of fake news dissemination on Facebook. Sci Adv. 2019. doi: 10.1126/sciadv.aau4586
  • Guo C, Cao J, Zhang X, Shu K, Yu M (2019) Exploiting emotions for fake news detection on social media. arXiv preprint arXiv:1903.01728
  • Guo B, Ding Y, Yao L, Liang Y, Yu Z. The future of false information detection on social media: new perspectives and trends. ACM Comput Surv (CSUR). 2020;53(4):1–36. doi: 10.1145/3393880
  • Gupta A, Li H, Farnoush A, Jiang W. Understanding patterns of covid infodemic: a systematic and pragmatic approach to curb fake news. J Bus Res. 2022;140:670–683. doi: 10.1016/j.jbusres.2021.11.032
  • Ha L, Andreu Perez L, Ray R. Mapping recent development in scholarship on fake news and misinformation, 2008 to 2017: disciplinary contribution, topics, and impact. Am Behav Sci. 2021;65(2):290–315. doi: 10.1177/0002764219869402
  • Habib A, Asghar MZ, Khan A, Habib A, Khan A. False information detection in online content and its role in decision making: a systematic literature review. Soc Netw Anal Min. 2019;9(1):1–20. doi: 10.1007/s13278-019-0595-5
  • Hage H, Aïmeur E, Guedidi A (2021) Understanding the landscape of online deception. In: Research anthology on fake news, political warfare, and combatting the spread of misinformation. IGI Global, pp 39–66. 10.4018/978-1-7998-2543-2.ch014
  • Hakak S, Alazab M, Khan S, Gadekallu TR, Maddikunta PKR, Khan WZ. An ensemble machine learning approach through effective feature extraction to classify fake news. Futur Gener Comput Syst. 2021;117:47–58. doi: 10.1016/j.future.2020.11.022
  • Hamdi T, Slimi H, Bounhas I, Slimani Y (2020) A hybrid approach for fake news detection in Twitter based on user features and graph embedding. In: International conference on distributed computing and internet technology. Springer, Berlin, pp 266–280. 10.1007/978-3-030-36987-3_17
  • Hameleers M. Separating truth from lies: comparing the effects of news media literacy interventions and fact-checkers in response to political misinformation in the US and Netherlands. Inf Commun Soc. 2022;25(1):110–126. doi: 10.1080/1369118X.2020.1764603
  • Hameleers M, Powell TE, Van Der Meer TG, Bos L. A picture paints a thousand lies? The effects and mechanisms of multimodal disinformation and rebuttals disseminated via social media. Polit Commun. 2020;37(2):281–301. doi: 10.1080/10584609.2019.1674979
  • Hameleers M, Brosius A, de Vreese CH. Whom to trust? Media exposure patterns of citizens with perceptions of misinformation and disinformation related to the news media. Eur J Commun. 2022. doi: 10.1177/02673231211072667
  • Hartley K, Vu MK. Fighting fake news in the Covid-19 era: policy insights from an equilibrium model. Policy Sci. 2020; 53 (4):735–758. doi: 10.1007/s11077-020-09405-z. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Hasan HR, Salah K. Combating deepfake videos using blockchain and smart contracts. IEEE Access. 2019; 7 :41596–41606. doi: 10.1109/ACCESS.2019.2905689. [ CrossRef ] [ Google Scholar ]
  • Hiriyannaiah S, Srinivas A, Shetty GK, Siddesh G, Srinivasa K (2020) A computationally intelligent agent for detecting fake news using generative adversarial networks. Hybrid computational intelligence: challenges and applications. pp 69–96 10.1016/B978-0-12-818699-2.00004-4
  • Hosseinimotlagh S, Papalexakis EE (2018) Unsupervised content-based identification of fake news articles with tensor decomposition ensembles. In: Proceedings of the workshop on misinformation and misbehavior mining on the web (MIS2)
  • Huckle S, White M. Fake news: a technological approach to proving the origins of content, using blockchains. Big Data. 2017;5(4):356–371. doi: 10.1089/big.2017.0071.
  • Huffaker JS, Kummerfeld JK, Lasecki WS, Ackerman MS (2020) Crowdsourced detection of emotionally manipulative language. In: Proceedings of the 2020 CHI conference on human factors in computing systems, pp 1–14. 10.1145/3313831.3376375
  • Ireton C, Posetti J. Journalism, fake news & disinformation: handbook for journalism education and training. Paris: UNESCO Publishing; 2018.
  • Islam MR, Liu S, Wang X, Xu G. Deep learning for misinformation detection on online social networks: a survey and new perspectives. Soc Netw Anal Min. 2020;10(1):1–20. doi: 10.1007/s13278-020-00696-x.
  • Ismailov M, Tsikerdekis M, Zeadally S. Vulnerabilities to online social network identity deception detection research and recommendations for mitigation. Future Internet. 2020;12(9):148. doi: 10.3390/fi12090148.
  • Jakesch M, Koren M, Evtushenko A, Naaman M (2019) The role of source and expressive responding in political news evaluation. In: Computation and journalism symposium
  • Jamieson KH. Cyberwar: how Russian hackers and trolls helped elect a president: what we don’t, can’t, and do know. Oxford: Oxford University Press; 2020.
  • Jiang S, Chen X, Zhang L, Chen S, Liu H (2019) User-characteristic enhanced model for fake news detection in social media. In: CCF International conference on natural language processing and Chinese computing, Springer, Berlin, pp 634–646. 10.1007/978-3-030-32233-5_49
  • Jin Z, Cao J, Zhang Y, Luo J (2016) News verification by exploiting conflicting social viewpoints in microblogs. In: Proceedings of the AAAI conference on artificial intelligence
  • Jing TW, Murugesan RK (2018) A theoretical framework to build trust and prevent fake news in social media using blockchain. In: International conference of reliable information and communication technology. Springer, Berlin, pp 955–962. 10.1007/978-3-319-99007-1_88
  • Jones-Jang SM, Mortensen T, Liu J. Does media literacy help identification of fake news? Information literacy helps, but other literacies don’t. Am Behav Sci. 2021;65(2):371–388. doi: 10.1177/0002764219869406.
  • Jungherr A, Schroeder R. Disinformation and the structural transformations of the public arena: addressing the actual challenges to democracy. Soc Media Soc. 2021. doi: 10.1177/2056305121988928.
  • Kaliyar RK (2018) Fake news detection using a deep neural network. In: 2018 4th international conference on computing communication and automation (ICCCA), IEEE, pp 1–7. 10.1109/CCAA.2018.8777343
  • Kaliyar RK, Goswami A, Narang P, Sinha S. FNDNet—a deep convolutional neural network for fake news detection. Cogn Syst Res. 2020;61:32–44. doi: 10.1016/j.cogsys.2019.12.005.
  • Kapantai E, Christopoulou A, Berberidis C, Peristeras V. A systematic literature review on disinformation: toward a unified taxonomical framework. New Media Soc. 2021;23(5):1301–1326. doi: 10.1177/1461444820959296.
  • Kapusta J, Benko L, Munk M (2019) Fake news identification based on sentiment and frequency analysis. In: International conference Europe middle east and North Africa information systems and technologies to support learning. Springer, Berlin, pp 400–409. 10.1007/978-3-030-36778-7_44
  • Kaur S, Kumar P, Kumaraguru P. Automating fake news detection system using multi-level voting model. Soft Comput. 2020;24(12):9049–9069. doi: 10.1007/s00500-019-04436-y.
  • Khan SA, Alkawaz MH, Zangana HM (2019) The use and abuse of social media for spreading fake news. In: 2019 IEEE international conference on automatic control and intelligent systems (I2CACIS), IEEE, pp 145–148. 10.1109/I2CACIS.2019.8825029
  • Kim J, Tabibian B, Oh A, Schölkopf B, Gomez-Rodriguez M (2018) Leveraging the crowd to detect and reduce the spread of fake news and misinformation. In: Proceedings of the eleventh ACM international conference on web search and data mining, pp 324–332. 10.1145/3159652.3159734
  • Klein D, Wueller J. Fake news: a legal perspective. J Internet Law. 2017;20(10):5–13.
  • Kogan S, Moskowitz TJ, Niessner M (2019) Fake news: evidence from financial markets. Available at SSRN 3237763
  • Kuklinski JH, Quirk PJ, Jerit J, Schwieder D, Rich RF. Misinformation and the currency of democratic citizenship. J Polit. 2000;62(3):790–816. doi: 10.1111/0022-3816.00033.
  • Kumar S, Shah N (2018) False information on web and social media: a survey. arXiv preprint arXiv:1804.08559
  • Kumar S, West R, Leskovec J (2016) Disinformation on the web: impact, characteristics, and detection of Wikipedia hoaxes. In: Proceedings of the 25th international conference on world wide web, pp 591–602. 10.1145/2872427.2883085
  • La Barbera D, Roitero K, Demartini G, Mizzaro S, Spina D (2020) Crowdsourcing truthfulness: the impact of judgment scale and assessor bias. In: European conference on information retrieval. Springer, Berlin, pp 207–214. 10.1007/978-3-030-45442-5_26
  • Lanius C, Weber R, MacKenzie WI. Use of bot and content flags to limit the spread of misinformation among social networks: a behavior and attitude survey. Soc Netw Anal Min. 2021;11(1):1–15. doi: 10.1007/s13278-021-00739-x.
  • Lazer DM, Baum MA, Benkler Y, Berinsky AJ, Greenhill KM, Menczer F, Metzger MJ, Nyhan B, Pennycook G, Rothschild D, et al. The science of fake news. Science. 2018;359(6380):1094–1096. doi: 10.1126/science.aao2998.
  • Le T, Shu K, Molina MD, Lee D, Sundar SS, Liu H (2019) 5 sources of clickbaits you should know! Using synthetic clickbaits to improve prediction and distinguish between bot-generated and human-written headlines. In: 2019 IEEE/ACM international conference on advances in social networks analysis and mining (ASONAM). IEEE, pp 33–40. 10.1145/3341161.3342875
  • Lewandowsky S (2020) Climate change, disinformation, and how to combat it. In: Annual Review of Public Health 42. 10.1146/annurev-publhealth-090419-102409
  • Liu Y, Wu YF (2018) Early detection of fake news on social media through propagation path classification with recurrent and convolutional networks. In: Proceedings of the AAAI conference on artificial intelligence, pp 354–361
  • Luo M, Hancock JT, Markowitz DM. Credibility perceptions and detection accuracy of fake news headlines on social media: effects of truth-bias and endorsement cues. Commun Res. 2022;49(2):171–195. doi: 10.1177/0093650220921321.
  • Lutzke L, Drummond C, Slovic P, Árvai J. Priming critical thinking: simple interventions limit the influence of fake news about climate change on Facebook. Glob Environ Chang. 2019;58:101964. doi: 10.1016/j.gloenvcha.2019.101964.
  • Maertens R, Anseel F, van der Linden S. Combatting climate change misinformation: evidence for longevity of inoculation and consensus messaging effects. J Environ Psychol. 2020;70:101455. doi: 10.1016/j.jenvp.2020.101455.
  • Mahabub A. A robust technique of fake news detection using ensemble voting classifier and comparison with other classifiers. SN Appl Sci. 2020;2(4):1–9. doi: 10.1007/s42452-020-2326-y.
  • Mahbub S, Pardede E, Kayes A, Rahayu W. Controlling astroturfing on the internet: a survey on detection techniques and research challenges. Int J Web Grid Serv. 2019;15(2):139–158. doi: 10.1504/IJWGS.2019.099561.
  • Marsden C, Meyer T, Brown I. Platform values and democratic elections: how can the law regulate digital disinformation? Comput Law Secur Rev. 2020;36:105373. doi: 10.1016/j.clsr.2019.105373.
  • Masciari E, Moscato V, Picariello A, Sperlí G (2020) Detecting fake news by image analysis. In: Proceedings of the 24th symposium on international database engineering and applications, pp 1–5. 10.1145/3410566.3410599
  • Mazzeo V, Rapisarda A. Investigating fake and reliable news sources using complex networks analysis. Front Phys. 2022;10:886544. doi: 10.3389/fphy.2022.886544.
  • McGrew S. Learning to evaluate: an intervention in civic online reasoning. Comput Educ. 2020;145:103711. doi: 10.1016/j.compedu.2019.103711.
  • McGrew S, Breakstone J, Ortega T, Smith M, Wineburg S. Can students evaluate online sources? Learning from assessments of civic online reasoning. Theory Res Soc Educ. 2018;46(2):165–193. doi: 10.1080/00933104.2017.1416320.
  • Meel P, Vishwakarma DK. Fake news, rumor, information pollution in social media and web: a contemporary survey of state-of-the-arts, challenges and opportunities. Expert Syst Appl. 2020;153:112986. doi: 10.1016/j.eswa.2019.112986.
  • Meese J, Frith J, Wilken R. Covid-19, 5G conspiracies and infrastructural futures. Media Int Aust. 2020;177(1):30–46. doi: 10.1177/1329878X20952165.
  • Metzger MJ, Hartsell EH, Flanagin AJ. Cognitive dissonance or credibility? A comparison of two theoretical explanations for selective exposure to partisan news. Commun Res. 2020;47(1):3–28. doi: 10.1177/0093650215613136.
  • Micallef N, He B, Kumar S, Ahamad M, Memon N (2020) The role of the crowd in countering misinformation: a case study of the Covid-19 infodemic. arXiv preprint arXiv:2011.05773
  • Mihailidis P, Viotty S. Spreadable spectacle in digital culture: civic expression, fake news, and the role of media literacies in “post-fact” society. Am Behav Sci. 2017;61(4):441–454. doi: 10.1177/0002764217701217.
  • Mishra R (2020) Fake news detection using higher-order user to user mutual-attention progression in propagation paths. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops, pp 652–653
  • Mishra S, Shukla P, Agarwal R. Analyzing machine learning enabled fake news detection techniques for diversified datasets. Wirel Commun Mobile Comput. 2022. doi: 10.1155/2022/1575365.
  • Molina MD, Sundar SS, Le T, Lee D. “Fake news” is not simply false information: a concept explication and taxonomy of online content. Am Behav Sci. 2021;65(2):180–212. doi: 10.1177/0002764219878224.
  • Moro C, Birt JR (2022) Review bombing is a dirty practice, but research shows games do benefit from online feedback. The Conversation. https://research.bond.edu.au/en/publications/review-bombing-is-a-dirty-practice-but-research-shows-games-do-be
  • Mustafaraj E, Metaxas PT (2017) The fake news spreading plague: was it preventable? In: Proceedings of the 2017 ACM on web science conference, pp 235–239. 10.1145/3091478.3091523
  • Nagel TW. Measuring fake news acumen using a news media literacy instrument. J Media Liter Educ. 2022;14(1):29–42. doi: 10.23860/JMLE-2022-14-1-3.
  • Nakov P (2020) Can we spot the “fake news” before it was even written? arXiv preprint arXiv:2008.04374
  • Nekmat E. Nudge effect of fact-check alerts: source influence and media skepticism on sharing of news misinformation in social media. Soc Media Soc. 2020. doi: 10.1177/2056305119897322.
  • Nygren T, Brounéus F, Svensson G. Diversity and credibility in young people’s news feeds: a foundation for teaching and learning citizenship in a digital era. J Soc Sci Educ. 2019;18(2):87–109. doi: 10.4119/jsse-917.
  • Nyhan B, Reifler J. Displacing misinformation about events: an experimental test of causal corrections. J Exp Polit Sci. 2015;2(1):81–93. doi: 10.1017/XPS.2014.22.
  • Nyhan B, Porter E, Reifler J, Wood TJ. Taking fact-checks literally but not seriously? The effects of journalistic fact-checking on factual beliefs and candidate favorability. Polit Behav. 2020;42(3):939–960. doi: 10.1007/s11109-019-09528-x.
  • Nyow NX, Chua HN (2019) Detecting fake news with tweets’ properties. In: 2019 IEEE conference on application, information and network security (AINS), IEEE, pp 24–29. 10.1109/AINS47559.2019.8968706
  • Ochoa IS, de Mello G, Silva LA, Gomes AJ, Fernandes AM, Leithardt VRQ (2019) Fakechain: a blockchain architecture to ensure trust in social media networks. In: International conference on the quality of information and communications technology. Springer, Berlin, pp 105–118. 10.1007/978-3-030-29238-6_8
  • Ozbay FA, Alatas B. Fake news detection within online social media using supervised artificial intelligence algorithms. Physica A. 2020;540:123174. doi: 10.1016/j.physa.2019.123174.
  • Ozturk P, Li H, Sakamoto Y (2015) Combating rumor spread on social media: the effectiveness of refutation and warning. In: 2015 48th Hawaii international conference on system sciences, IEEE, pp 2406–2414. 10.1109/HICSS.2015.288
  • Parikh SB, Atrey PK (2018) Media-rich fake news detection: a survey. In: 2018 IEEE conference on multimedia information processing and retrieval (MIPR), IEEE, pp 436–441. 10.1109/MIPR.2018.00093
  • Parrish K (2018) Deep learning & machine learning: what’s the difference? Online: https://parsers.me/deep-learning-machine-learning-whats-the-difference/. Accessed 20 May 2020
  • Paschen J. Investigating the emotional appeal of fake news using artificial intelligence and human contributions. J Prod Brand Manag. 2019;29(2):223–233. doi: 10.1108/JPBM-12-2018-2179.
  • Pathak A, Srihari RK (2019) Breaking! Presenting fake news corpus for automated fact checking. In: Proceedings of the 57th annual meeting of the association for computational linguistics: student research workshop, pp 357–362
  • Peng J, Detchon S, Choo KKR, Ashman H. Astroturfing detection in social media: a binary n-gram-based approach. Concurr Comput: Pract Exp. 2017;29(17):e4013. doi: 10.1002/cpe.4013.
  • Pennycook G, Rand DG. Fighting misinformation on social media using crowdsourced judgments of news source quality. Proc Natl Acad Sci. 2019;116(7):2521–2526. doi: 10.1073/pnas.1806781116.
  • Pennycook G, Rand DG. Who falls for fake news? The roles of bullshit receptivity, overclaiming, familiarity, and analytic thinking. J Pers. 2020;88(2):185–200. doi: 10.1111/jopy.12476.
  • Pennycook G, Bear A, Collins ET, Rand DG. The implied truth effect: attaching warnings to a subset of fake news headlines increases perceived accuracy of headlines without warnings. Manag Sci. 2020;66(11):4944–4957. doi: 10.1287/mnsc.2019.3478.
  • Pennycook G, McPhetres J, Zhang Y, Lu JG, Rand DG. Fighting Covid-19 misinformation on social media: experimental evidence for a scalable accuracy-nudge intervention. Psychol Sci. 2020;31(7):770–780. doi: 10.1177/0956797620939054.
  • Potthast M, Kiesel J, Reinartz K, Bevendorff J, Stein B (2017) A stylometric inquiry into hyperpartisan and fake news. arXiv preprint arXiv:1702.05638
  • Previti M, Rodriguez-Fernandez V, Camacho D, Carchiolo V, Malgeri M (2020) Fake news detection using time series and user features classification. In: International conference on the applications of evolutionary computation (Part of EvoStar), Springer, Berlin, pp 339–353. 10.1007/978-3-030-43722-0_22
  • Przybyla P (2020) Capturing the style of fake news. In: Proceedings of the AAAI conference on artificial intelligence, pp 490–497. 10.1609/aaai.v34i01.5386
  • Qayyum A, Qadir J, Janjua MU, Sher F. Using blockchain to rein in the new post-truth world and check the spread of fake news. IT Prof. 2019;21(4):16–24. doi: 10.1109/MITP.2019.2910503.
  • Qian F, Gong C, Sharma K, Liu Y (2018) Neural user response generator: fake news detection with collective user intelligence. In: IJCAI, vol 18, pp 3834–3840. 10.24963/ijcai.2018/533
  • Raza S, Ding C. Fake news detection based on news content and social contexts: a transformer-based approach. Int J Data Sci Anal. 2022;13(4):335–362. doi: 10.1007/s41060-021-00302-z.
  • Ricard J, Medeiros J (2020) Using misinformation as a political weapon: Covid-19 and Bolsonaro in Brazil. Harv Kennedy School Misinformation Rev 1(3). https://misinforeview.hks.harvard.edu/article/using-misinformation-as-a-political-weapon-covid-19-and-bolsonaro-in-brazil/
  • Roozenbeek J, van der Linden S. Fake news game confers psychological resistance against online misinformation. Palgrave Commun. 2019;5(1):1–10. doi: 10.1057/s41599-019-0279-9.
  • Roozenbeek J, van der Linden S, Nygren T. Prebunking interventions based on the psychological theory of “inoculation” can reduce susceptibility to misinformation across cultures. Harv Kennedy School Misinformation Rev. 2020. doi: 10.37016//mr-2020-008.
  • Roozenbeek J, Schneider CR, Dryhurst S, Kerr J, Freeman AL, Recchia G, Van Der Bles AM, Van Der Linden S. Susceptibility to misinformation about Covid-19 around the world. R Soc Open Sci. 2020;7(10):201199. doi: 10.1098/rsos.201199.
  • Rubin VL, Conroy N, Chen Y, Cornwell S (2016) Fake news or truth? Using satirical cues to detect potentially misleading news. In: Proceedings of the second workshop on computational approaches to deception detection, pp 7–17
  • Ruchansky N, Seo S, Liu Y (2017) Csi: a hybrid deep model for fake news detection. In: Proceedings of the 2017 ACM on conference on information and knowledge management, pp 797–806. 10.1145/3132847.3132877
  • Schuyler AJ (2019) Regulating facts: a procedural framework for identifying, excluding, and deterring the intentional or knowing proliferation of fake news online. Univ Ill JL Technol Pol’y, vol 2019, pp 211–240
  • Shae Z, Tsai J (2019) AI blockchain platform for trusting news. In: 2019 IEEE 39th international conference on distributed computing systems (ICDCS), IEEE, pp 1610–1619. 10.1109/ICDCS.2019.00160
  • Shang W, Liu M, Lin W, Jia M (2018) Tracing the source of news based on blockchain. In: 2018 IEEE/ACIS 17th international conference on computer and information science (ICIS), IEEE, pp 377–381. 10.1109/ICIS.2018.8466516
  • Shao C, Ciampaglia GL, Flammini A, Menczer F (2016) Hoaxy: A platform for tracking online misinformation. In: Proceedings of the 25th international conference companion on world wide web, pp 745–750. 10.1145/2872518.2890098
  • Shao C, Ciampaglia GL, Varol O, Yang KC, Flammini A, Menczer F. The spread of low-credibility content by social bots. Nat Commun. 2018;9(1):1–9. doi: 10.1038/s41467-018-06930-7.
  • Shao C, Hui PM, Wang L, Jiang X, Flammini A, Menczer F, Ciampaglia GL. Anatomy of an online misinformation network. PLoS ONE. 2018;13(4):e0196087. doi: 10.1371/journal.pone.0196087.
  • Sharma K, Qian F, Jiang H, Ruchansky N, Zhang M, Liu Y. Combating fake news: a survey on identification and mitigation techniques. ACM Trans Intell Syst Technol (TIST). 2019;10(3):1–42. doi: 10.1145/3305260.
  • Sharma K, Seo S, Meng C, Rambhatla S, Liu Y (2020) Covid-19 on social media: analyzing misinformation in Twitter conversations. arXiv preprint arXiv:2003.12309
  • Shen C, Kasra M, Pan W, Bassett GA, Malloch Y, O’Brien JF. Fake images: the effects of source, intermediary, and digital media literacy on contextual assessment of image credibility online. New Media Soc. 2019;21(2):438–463. doi: 10.1177/1461444818799526.
  • Sherman IN, Redmiles EM, Stokes JW (2020) Designing indicators to combat fake media. arXiv preprint arXiv:2010.00544
  • Shi P, Zhang Z, Choo KKR. Detecting malicious social bots based on clickstream sequences. IEEE Access. 2019;7:28855–28862. doi: 10.1109/ACCESS.2019.2901864.
  • Shu K, Sliva A, Wang S, Tang J, Liu H. Fake news detection on social media: a data mining perspective. ACM SIGKDD Explor Newsl. 2017;19(1):22–36. doi: 10.1145/3137597.3137600.
  • Shu K, Mahudeswaran D, Wang S, Lee D, Liu H (2018a) FakeNewsNet: a data repository with news content, social context and spatiotemporal information for studying fake news on social media. arXiv preprint arXiv:1809.01286. 10.1089/big.2020.0062
  • Shu K, Wang S, Liu H (2018b) Understanding user profiles on social media for fake news detection. In: 2018 IEEE conference on multimedia information processing and retrieval (MIPR), IEEE, pp 430–435. 10.1109/MIPR.2018.00092
  • Shu K, Wang S, Liu H (2019a) Beyond news contents: the role of social context for fake news detection. In: Proceedings of the twelfth ACM international conference on web search and data mining, pp 312–320. 10.1145/3289600.3290994
  • Shu K, Zhou X, Wang S, Zafarani R, Liu H (2019b) The role of user profiles for fake news detection. In: Proceedings of the 2019 IEEE/ACM international conference on advances in social networks analysis and mining, pp 436–439. 10.1145/3341161.3342927
  • Shu K, Bhattacharjee A, Alatawi F, Nazer TH, Ding K, Karami M, Liu H. Combating disinformation in a social media age. Wiley Interdiscip Rev: Data Min Knowl Discov. 2020;10(6):e1385. doi: 10.1002/widm.1385.
  • Shu K, Mahudeswaran D, Wang S, Liu H. Hierarchical propagation networks for fake news detection: investigation and exploitation. Proc Int AAAI Conf Web Soc Media (AAAI Press). 2020;14:626–637.
  • Shu K, Wang S, Lee D, Liu H (2020c) Mining disinformation and fake news: concepts, methods, and recent advancements. In: Disinformation, misinformation, and fake news in social media. Springer, Berlin, pp 1–19. 10.1007/978-3-030-42699-6_1
  • Shu K, Zheng G, Li Y, Mukherjee S, Awadallah AH, Ruston S, Liu H (2020d) Early detection of fake news with multi-source weak social supervision. In: ECML/PKDD (3), pp 650–666
  • Singh VK, Ghosh I, Sonagara D. Detecting fake news stories via multimodal analysis. J Am Soc Inf Sci. 2021;72(1):3–17. doi: 10.1002/asi.24359.
  • Sintos S, Agarwal PK, Yang J (2019) Selecting data to clean for fact checking: minimizing uncertainty vs. maximizing surprise. Proc VLDB Endowm 12(13):2408–2421. 10.14778/3358701.3358708
  • Snow J (2017) Can AI win the war against fake news? MIT Technology Review. Online: https://www.technologyreview.com/s/609717/can-ai-win-the-war-against-fake-news/. Accessed 3 Oct 2020
  • Song G, Kim S, Hwang H, Lee K (2019) Blockchain-based notarization for social media. In: 2019 IEEE international conference on consumer electronics (ICCE), IEEE, pp 1–2. 10.1109/ICCE.2019.8661978
  • Starbird K, Arif A, Wilson T (2019) Disinformation as collaborative work: surfacing the participatory nature of strategic information operations. In: Proceedings of the ACM on human–computer interaction, vol 3(CSCW), pp 1–26. 10.1145/3359229
  • Sterret D, Malato D, Benz J, Kantor L, Tompson T, Rosenstiel T, Sonderman J, Loker K, Swanson E (2018) Who shared it? How Americans decide what news to trust on social media. Technical report, Norc Working Paper Series, WP-2018-001, pp 1–24
  • Sutton RM, Douglas KM. Conspiracy theories and the conspiracy mindset: implications for political ideology. Curr Opin Behav Sci. 2020;34:118–122. doi: 10.1016/j.cobeha.2020.02.015.
  • Tandoc EC Jr, Thomas RJ, Bishop L. What is (fake) news? Analyzing news values (and more) in fake stories. Media Commun. 2021;9(1):110–119. doi: 10.17645/mac.v9i1.3331.
  • Tchakounté F, Faissal A, Atemkeng M, Ntyam A. A reliable weighting scheme for the aggregation of crowd intelligence to detect fake news. Information. 2020;11(6):319. doi: 10.3390/info11060319.
  • Tchechmedjiev A, Fafalios P, Boland K, Gasquet M, Zloch M, Zapilko B, Dietze S, Todorov K (2019) ClaimsKG: a knowledge graph of fact-checked claims. In: International semantic web conference. Springer, Berlin, pp 309–324. 10.1007/978-3-030-30796-7_20
  • Treen KMd, Williams HT, O’Neill SJ. Online misinformation about climate change. Wiley Interdiscip Rev Clim Change. 2020;11(5):e665. doi: 10.1002/wcc.665.
  • Tsang SJ. Motivated fake news perception: the impact of news sources and policy support on audiences’ assessment of news fakeness. J Mass Commun Q. 2020. doi: 10.1177/1077699020952129.
  • Tschiatschek S, Singla A, Gomez Rodriguez M, Merchant A, Krause A (2018) Fake news detection in social networks via crowd signals. In: Companion proceedings of the the web conference 2018, pp 517–524. 10.1145/3184558.3188722
  • Uppada SK, Manasa K, Vidhathri B, Harini R, Sivaselvan B. Novel approaches to fake news and fake account detection in OSNs: user social engagement and visual content centric model. Soc Netw Anal Min. 2022;12(1):1–19. doi: 10.1007/s13278-022-00878-9.
  • Van der Linden S, Roozenbeek J (2020) Psychological inoculation against fake news. In: Accepting, sharing, and correcting misinformation, the psychology of fake news. 10.4324/9780429295379-11
  • Van der Linden S, Panagopoulos C, Roozenbeek J. You are fake news: political bias in perceptions of fake news. Media Cult Soc. 2020;42(3):460–470. doi: 10.1177/0163443720906992.
  • Valenzuela S, Muñiz C, Santos M. Social media and belief in misinformation in Mexico: a case of maximal panic, minimal effects? Int J Press Polit. 2022. doi: 10.1177/19401612221088988.
  • Vasu N, Ang B, Teo TA, Jayakumar S, Raizal M, Ahuja J (2018) Fake news: national security in the post-truth era. RSIS
  • Vereshchaka A, Cosimini S, Dong W (2020) Analyzing and distinguishing fake and real news to mitigate the problem of disinformation. In: Computational and mathematical organization theory, pp 1–15. 10.1007/s10588-020-09307-8
  • Verstraete M, Bambauer DE, Bambauer JR (2017) Identifying and countering fake news. Arizona legal studies discussion paper 73(17-15). 10.2139/ssrn.3007971
  • Vilmer J, Escorcia A, Guillaume M, Herrera J (2018) Information manipulation: a challenge for our democracies. In: Report by the Policy Planning Staff (CAPS) of the ministry for europe and foreign affairs, and the institute for strategic research (RSEM) of the Ministry for the Armed Forces
  • Vishwakarma DK, Varshney D, Yadav A. Detection and veracity analysis of fake news via scrapping and authenticating the web search. Cogn Syst Res. 2019;58:217–229. doi: 10.1016/j.cogsys.2019.07.004.
  • Vlachos A, Riedel S (2014) Fact checking: task definition and dataset construction. In: Proceedings of the ACL 2014 workshop on language technologies and computational social science, pp 18–22. 10.3115/v1/W14-2508
  • von der Weth C, Abdul A, Fan S, Kankanhalli M (2020) Helping users tackle algorithmic threats on social media: a multimedia research agenda. In: Proceedings of the 28th ACM international conference on multimedia, pp 4425–4434. 10.1145/3394171.3414692
  • Vosoughi S, Roy D, Aral S. The spread of true and false news online. Science. 2018;359(6380):1146–1151. doi: 10.1126/science.aap9559.
  • Vraga EK, Bode L. Using expert sources to correct health misinformation in social media. Sci Commun. 2017;39(5):621–645. doi: 10.1177/1075547017731776.
  • Waldman AE. The marketplace of fake news. Univ Pa J Const Law. 2017;20:845.
  • Wang WY (2017) “Liar, liar pants on fire”: a new benchmark dataset for fake news detection. arXiv preprint arXiv:1705.00648
  • Wang L, Wang Y, de Melo G, Weikum G. Understanding archetypes of fake news via fine-grained classification. Soc Netw Anal Min. 2019;9(1):1–17. doi: 10.1007/s13278-019-0580-z.
  • Wang Y, Han H, Ding Y, Wang X, Liao Q (2019b) Learning contextual features with multi-head self-attention for fake news detection. In: International conference on cognitive computing. Springer, Berlin, pp 132–142. 10.1007/978-3-030-23407-2_11
  • Wang Y, McKee M, Torbica A, Stuckler D. Systematic literature review on the spread of health-related misinformation on social media. Soc Sci Med. 2019; 240 :112552. doi: 10.1016/j.socscimed.2019.112552. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Wang Y, Yang W, Ma F, Xu J, Zhong B, Deng Q, Gao J (2020) Weak supervision for fake news detection via reinforcement learning. In: Proceedings of the AAAI conference on artificial intelligence, pp 516–523. 10.1609/aaai.v34i01.5389
  • Wardle C (2017) Fake news. It’s complicated. Online: https://medium.com/1st-draft/fake-news-its-complicated-d0f773766c79 . Accessed 3 Oct 2020
  • Wardle C. The need for smarter definitions and practical, timely empirical research on information disorder. Digit J. 2018; 6 (8):951–963. doi: 10.1080/21670811.2018.1502047. [ CrossRef ] [ Google Scholar ]
  • Wardle C, Derakhshan H. Information disorder: toward an interdisciplinary framework for research and policy making. Council Eur Rep. 2017; 27 :1–107. [ Google Scholar ]
  • Weiss AP, Alwan A, Garcia EP, Garcia J. Surveying fake news: assessing university faculty’s fragmented definition of fake news and its impact on teaching critical thinking. Int J Educ Integr. 2020; 16 (1):1–30. doi: 10.1007/s40979-019-0049-x. [ CrossRef ] [ Google Scholar ]
  • Wu L, Liu H (2018) Tracing fake-news footprints: characterizing social media messages by how they propagate. In: Proceedings of the eleventh ACM international conference on web search and data mining, pp 637–645. 10.1145/3159652.3159677
  • Wu L, Rao Y (2020) Adaptive interaction fusion networks for fake news detection. arXiv preprint arXiv:2004.10009
  • Wu L, Morstatter F, Carley KM, Liu H. Misinformation in social media: definition, manipulation, and detection. ACM SIGKDD Explor Newsl. 2019; 21 (2):80–90. doi: 10.1145/3373464.3373475. [ CrossRef ] [ Google Scholar ]
  • Wu Y, Ngai EW, Wu P, Wu C. Fake news on the internet: a literature review, synthesis and directions for future research. Intern Res. 2022 doi: 10.1108/INTR-05-2021-0294. [ CrossRef ] [ Google Scholar ]
  • Xu K, Wang F, Wang H, Yang B. Detecting fake news over online social media via domain reputations and content understanding. Tsinghua Sci Technol. 2019; 25 (1):20–27. doi: 10.26599/TST.2018.9010139. [ CrossRef ] [ Google Scholar ]
  • Yang F, Pentyala SK, Mohseni S, Du M, Yuan H, Linder R, Ragan ED, Ji S, Hu X (2019a) Xfake: explainable fake news detector with visualizations. In: The world wide web conference, pp 3600–3604. 10.1145/3308558.3314119
  • Yang X, Li Y, Lyu S (2019b) Exposing deep fakes using inconsistent head poses. In: ICASSP 2019-2019 IEEE international conference on acoustics, speech and signal processing (ICASSP), IEEE, pp 8261–8265. 10.1109/ICASSP.2019.8683164
  • Yaqub W, Kakhidze O, Brockman ML, Memon N, Patil S (2020) Effects of credibility indicators on social media news sharing intent. In: Proceedings of the 2020 CHI conference on human factors in computing systems, pp 1–14. 10.1145/3313831.3376213
  • Yavary A, Sajedi H, Abadeh MS. Information verification in social networks based on user feedback and news agencies. Soc Netw Anal Min. 2020; 10 (1):1–8. doi: 10.1007/s13278-019-0616-4. [ CrossRef ] [ Google Scholar ]
  • Yazdi KM, Yazdi AM, Khodayi S, Hou J, Zhou W, Saedy S. Improving fake news detection using k-means and support vector machine approaches. Int J Electron Commun Eng. 2020; 14 (2):38–42. doi: 10.5281/zenodo.3669287. [ CrossRef ] [ Google Scholar ]
  • Zannettou S, Sirivianos M, Blackburn J, Kourtellis N. The web of false information: rumors, fake news, hoaxes, clickbait, and various other shenanigans. J Data Inf Qual (JDIQ) 2019; 11 (3):1–37. doi: 10.1145/3309699. [ CrossRef ] [ Google Scholar ]
  • Zellers R, Holtzman A, Rashkin H, Bisk Y, Farhadi A, Roesner F, Choi Y (2019) Defending against neural fake news. arXiv preprint arXiv:1905.12616
  • Zhang X, Ghorbani AA. An overview of online fake news: characterization, detection, and discussion. Inf Process Manag. 2020; 57 (2):102025. doi: 10.1016/j.ipm.2019.03.004. [ CrossRef ] [ Google Scholar ]
  • Zhang J, Dong B, Philip SY (2020) Fakedetector: effective fake news detection with deep diffusive neural network. In: 2020 IEEE 36th international conference on data engineering (ICDE), IEEE, pp 1826–1829. 10.1109/ICDE48307.2020.00180
  • Zhang Q, Lipani A, Liang S, Yilmaz E (2019a) Reply-aided detection of misinformation via Bayesian deep learning. In: The world wide web conference, pp 2333–2343. 10.1145/3308558.3313718
  • Zhang X, Karaman S, Chang SF (2019b) Detecting and simulating artifacts in GAN fake images. In: 2019 IEEE international workshop on information forensics and security (WIFS), IEEE, pp 1–6 10.1109/WIFS47025.2019.9035107
  • Zhou X, Zafarani R. A survey of fake news: fundamental theories, detection methods, and opportunities. ACM Comput Surv (CSUR) 2020; 53 (5):1–40. doi: 10.1145/3395046. [ CrossRef ] [ Google Scholar ]
  • Zubiaga A, Aker A, Bontcheva K, Liakata M, Procter R. Detection and resolution of rumours in social media: a survey. ACM Comput Surv (CSUR) 2018; 51 (2):1–36. doi: 10.1145/3161603. [ CrossRef ] [ Google Scholar ]


The Media Did Not Make Up Trump’s Russia Scandal

Liberal bias mostly exists outside politics coverage.

NPR reporter Uri Berliner wrote an essay for The Free Press arguing that the network has lost chunks of its audience by growing too dogmatically progressive. Some of the evidence supports his claim. Unfortunately, he undermines his case by leading with an example that in no way vindicates the thesis, and actually undermines it: coverage of the Trump-Russia scandal .

Berliner presents the story as a nothingburger that NPR breathlessly hyped and then ignored when it turned out to exonerate the president:

“Persistent rumors that the Trump campaign colluded with Russia over the election became the catnip that drove reporting. At NPR, we hitched our wagon to Trump’s most visible antagonist, Representative Adam Schiff.  Schiff, who was the top Democrat on the House Intelligence Committee, became NPR’s guiding hand, its ever-present muse. By my count, NPR hosts interviewed Schiff 25 times about Trump and Russia. During many of those conversations, Schiff alluded to purported evidence of collusion. The Schiff talking points became the drumbeat of NPR news reports. But when the Mueller report found no credible evidence of collusion, NPR’s coverage was notably sparse. Russiagate quietly faded from our programming.”

Even though Republicans have repeated this ad nauseam to the point where The Free Press would blithely state it as fact, it is simply not true that the Mueller report “found no credible evidence of collusion.”

First, establishing “collusion” was explicitly not the objective of the Mueller investigation. Mueller saw his job as identifying criminal behavior. Collusion is not a crime. The Mueller report stated clearly that it was not attempting to prove whether or not Trump colluded with Russia:

In evaluating whether evidence about collective action of multiple individuals constituted a crime, we applied the framework of conspiracy law, not the concept of “collusion.” In so doing, the Office recognized that the word “collud[e]” was used in communications with the Acting Attorney General confirming certain aspects of the investigation’s scope and that the term has frequently been invoked in public reporting about the investigation. But collusion is not a specific offense or theory of liability found in the United States Code, nor is it a term of art in federal criminal law. For those reasons, the Office’s focus in analyzing questions of joint criminal liability was on conspiracy as defined in federal law.

Nonetheless, Mueller found extensive evidence of collusion between the Trump campaign and Russia. The evidence was summarized in a report by Just Security. It uncovered multiple secret meetings and communications between the two, including, but not limited to, the following: Trump campaign officials met with Russian agents in Trump Tower and were receptive to the offer of campaign assistance; Russian agents shared with Trump their plan to leak embarrassing emails; Trump’s campaign manager shared polling data with a figure linked to Russian intelligence; Trump appeared to have advance knowledge of the timing of the release of stolen Russian emails; and the campaign and Russia coordinated a response to Obama administration sanctions punishing Russia for its efforts on Trump’s behalf.

But because collusion is not a crime, Mueller refrained from stating an opinion as to whether this extensive pattern of furtive meetings in pursuit of a shared objective constituted “collusion.”

There was an investigation into whether Trump’s campaign colluded with Russia. That investigation was conducted by the bipartisan  Senate Intelligence Committee . And that report found even more evidence of collusion, including multiple links between Russian intelligence and the Russian figures interfacing with Trump’s campaign. The Senate identified Konstantin Kilimnik, the business partner of Trump campaign manager Paul Manafort, as a Russian intelligence agent. And it found two pieces of evidence that “raise the possibility of Manafort’s potential connection to the hack-and-leak operations” — the most direct kind of collusion — that it redacted for national-security reasons.

The Senate Intelligence report came out more than a year after the Mueller report and received a fraction of the media attention devoted to Mueller. But that disparity is not, as Berliner frames it, evidence of anti-Trump bias. It’s evidence of the opposite. The news media allowed Trump’s “no collusion” to misleadingly frame Mueller’s investigation and then buried the report that did investigate collusion.

In my experience, if you tell a conservative that there’s a damning story about a Republican the mainstream media ignored, they’ll look at you like you said there are live aliens in a government building. Still, they’re not wrong that the mainstream media has a great deal of liberal bias.

In my view, though, that bias exerts the strongest impact on cultural coverage and on siloed social liberal beats, especially ones related to identity politics, that often simply treat progressive activists as authority figures and convey their perspective uncritically. The New York Times became the target of left-wing protests because it covered the youth gender-medicine story with traditional journalistic methods rather than simply regurgitating activist talking points, as many other publications have done. The Times continues to stand out from other American media institutions in its idiosyncratic decision to cover divisions within the youth gender medical field. The Times wrote about a major new U.K. report casting doubt on medicalization of gender-questioning youth, but most American news outlets have covered the story in the same way Fox News covers stories that embarrass Republicans: not at all.

Yet that bias on social liberalism and culture does not extend to coverage of hard political news, which still retains the traditional features of reporting the claims of both parties. Both the mainstream media and its critics would benefit from thinking more carefully about the very different ways parts of their organizations have treated norms of objectivity.

Berliner thinks the Russia story is evidence the news media is hopelessly biased to the left. If anything, his misunderstanding of the story shows the bias is not as bad as he thinks.


NPR defends its journalism after senior editor says it has lost the public's trust


David Folkenflik


NPR is defending its journalism and integrity after a senior editor wrote an essay accusing it of losing the public's trust. Saul Loeb/AFP via Getty Images

NPR's top news executive defended its journalism and its commitment to reflecting a diverse array of views on Tuesday after a senior NPR editor wrote a broad critique of how the network has covered some of the most important stories of the age.

"An open-minded spirit no longer exists within NPR, and now, predictably, we don't have an audience that reflects America," writes Uri Berliner.

A strategic emphasis on diversity and inclusion on the basis of race, ethnicity and sexual orientation, promoted by NPR's former CEO, John Lansing, has fed "the absence of viewpoint diversity," Berliner writes.

NPR's chief news executive, Edith Chapin, wrote in a memo to staff Tuesday afternoon that she and the news leadership team strongly reject Berliner's assessment.

"We're proud to stand behind the exceptional work that our desks and shows do to cover a wide range of challenging stories," she wrote. "We believe that inclusion — among our staff, with our sourcing, and in our overall coverage — is critical to telling the nuanced stories of this country and our world."


She added, "None of our work is above scrutiny or critique. We must have vigorous discussions in the newsroom about how we serve the public as a whole."

A spokesperson for NPR said Chapin, who also serves as the network's chief content officer, would have no further comment.

Praised by NPR's critics

Berliner is a senior editor on NPR's Business Desk. (Disclosure: I, too, am part of the Business Desk, and Berliner has edited many of my past stories. He did not see any version of this article or participate in its preparation before it was posted publicly.)

Berliner's essay, titled "I've Been at NPR for 25 years. Here's How We Lost America's Trust," was published by The Free Press, a website that has welcomed journalists who have concluded that mainstream news outlets have become reflexively liberal.

Berliner writes that as a Subaru-driving, Sarah Lawrence College graduate who "was raised by a lesbian peace activist mother ," he fits the mold of a loyal NPR fan.

Yet Berliner says NPR's news coverage has fallen short on some of the most controversial stories of recent years, from the question of whether former President Donald Trump colluded with Russia in the 2016 election, to the origins of the virus that causes COVID-19, to the significance and provenance of emails leaked from a laptop owned by Hunter Biden weeks before the 2020 election. In addition, he blasted NPR's coverage of the Israel-Hamas conflict.

On each of these stories, Berliner asserts, NPR has suffered from groupthink due to too little diversity of viewpoints in the newsroom.

The essay ricocheted Tuesday around conservative media , with some labeling Berliner a whistleblower . Others picked it up on social media, including Elon Musk, who has lambasted NPR for leaving his social media site, X. (Musk emailed another NPR reporter a link to Berliner's article with a gibe that the reporter was a "quisling" — a World War II reference to someone who collaborates with the enemy.)

When asked for further comment late Tuesday, Berliner declined, saying the essay spoke for itself.

The arguments he raises — and counters — have percolated across U.S. newsrooms in recent years. The #MeToo sexual harassment scandals of 2016 and 2017 forced newsrooms to listen to and heed more junior colleagues. The social justice movement prompted by the killing of George Floyd in 2020 inspired a reckoning in many places. Newsroom leaders often appeared to stand on shaky ground.

Leaders at many newsrooms, including top editors at The New York Times and the Los Angeles Times , lost their jobs. Legendary Washington Post Executive Editor Martin Baron wrote in his memoir that he feared his bonds with the staff were "frayed beyond repair," especially over the degree of self-expression his journalists expected to exert on social media, before he decided to step down in early 2021.

Since then, Baron and others — including leaders of some of these newsrooms — have suggested that the pendulum has swung too far.


New York Times publisher A.G. Sulzberger warned last year against journalists embracing a stance of what he calls "one-side-ism": "where journalists are demonstrating that they're on the side of the righteous."

"I really think that that can create blind spots and echo chambers," he said.

Internal arguments at The Times over the strength of its reporting on accusations that Hamas engaged in sexual assaults as part of a strategy for its Oct. 7 attack on Israel erupted publicly . The paper conducted an investigation to determine the source of a leak over a planned episode of the paper's podcast The Daily on the subject, which months later has not been released. The newsroom guild accused the paper of "targeted interrogation" of journalists of Middle Eastern descent.

Heated pushback in NPR's newsroom

Given Berliner's account of private conversations, several NPR journalists question whether they can now trust him with unguarded assessments about stories in real time. Others express frustration that he had not sought out comment in advance of publication. Berliner acknowledged to me that for this story, he did not seek NPR's approval to publish the piece, nor did he give the network advance notice.

Some of Berliner's NPR colleagues are responding heatedly. Fernando Alfonso, a senior supervising editor for digital news, wrote that he wholeheartedly rejected Berliner's critique of the coverage of the Israel-Hamas conflict, for which NPR's journalists, like their peers, periodically put themselves at risk.

Alfonso also took issue with Berliner's concern over the focus on diversity at NPR.

"As a person of color who has often worked in newsrooms with little to no people who look like me, the efforts NPR has made to diversify its workforce and its sources are unique and appropriate given the news industry's long-standing lack of diversity," Alfonso says. "These efforts should be celebrated and not denigrated as Uri has done."

After this story was first published, Berliner contested Alfonso's characterization, saying his criticism of NPR is about the lack of diversity of viewpoints, not its diversity itself.

"I never criticized NPR's priority of achieving a more diverse workforce in terms of race, ethnicity and sexual orientation. I have not 'denigrated' NPR's newsroom diversity goals," Berliner said. "That's wrong."

Questions of diversity

Under former CEO John Lansing, NPR made increasing diversity, both of its staff and its audience, its "North Star" mission. Berliner says in the essay that NPR failed to consider broader diversity of viewpoint, noting, "In D.C., where NPR is headquartered and many of us live, I found 87 registered Democrats working in editorial positions and zero Republicans."

Berliner cited audience estimates that suggested a concurrent falloff in listening by Republicans. (The number of people listening to NPR broadcasts and terrestrial radio broadly has declined since the start of the pandemic.)

Former NPR vice president for news and ombudsman Jeffrey Dvorkin tweeted, "I know Uri. He's not wrong."

Others questioned Berliner's logic. "This probably gets causality somewhat backward," tweeted Semafor Washington editor Jordan Weissmann . "I'd guess that a lot of NPR listeners who voted for [Mitt] Romney have changed how they identify politically."

Similarly, Nieman Lab founder Joshua Benton suggested the rise of Trump alienated many NPR-appreciating Republicans from the GOP.

In recent years, NPR has greatly enhanced the percentage of people of color in its workforce and its executive ranks. Four out of 10 staffers are people of color; nearly half of NPR's leadership team identifies as Black, Asian or Latino.

"The philosophy is: Do you want to serve all of America and make sure it sounds like all of America, or not?" Lansing, who stepped down last month, says in response to Berliner's piece. "I'd welcome the argument against that."

"On radio, we were really lagging in our representation of an audience that makes us look like what America looks like today," Lansing says. The U.S. looks and sounds a lot different than it did in 1971, when NPR's first show was broadcast, Lansing says.

A network spokesperson says new NPR CEO Katherine Maher supports Chapin and her response to Berliner's critique.

The spokesperson says that Maher "believes that it's a healthy thing for a public service newsroom to engage in rigorous consideration of the needs of our audiences, including where we serve our mission well and where we can serve it better."

Disclosure: This story was reported and written by NPR Media Correspondent David Folkenflik and edited by Deputy Business Editor Emily Kopp and Managing Editor Gerry Holmes. Under NPR's protocol for reporting on itself, no NPR corporate official or news executive reviewed this story before it was posted publicly.


Editorials | Editorial: Liberal bias at NPR, old-school journalism and the reluctance to admit a mistake

The National Public Radio headquarters in Washington on April 20, 2020. (Ting Shen/The New York Times)

Uri Berliner, a journalist of a certain age, has been feeling some heartburn over what has been transpiring at his longtime employer, National Public Radio.

In a nuanced and thoughtful essay on the website The Free Press, founded by Bari Weiss and Nellie Bowles, Berliner detailed what he has seen as egregious liberal bias at his employer. Among Berliner’s most notable charges: the network’s refusal to admit that its oft-told story of the Trump presidential campaign colluding with Russia was a canard, even after Robert Mueller found no evidence of collusion; NPR’s determination to keep ignoring the clearly relevant Hunter Biden laptop story, even in the face of evidence that it contained politically relevant details of Biden family business dealings; and its stubborn refusal to take the “lab leak” theory of COVID origin seriously, clinging to the idea it was a right-wing conspiracy theory, even as more and more evidence was pointing in that direction.

In essence, looking back at the last presidential campaign, Berliner argued that the station had unethically refused to run anything that it thought might help Trump. And, therefore, NPR had thus changed from a neutral news outfit, following the facts, to a cabal of advocates for one side of the political divide.

We suspect few of our readers would be surprised to hear evidence that NPR has a liberal bias, both nationally and within its local affiliates. And we’ll point out that in all three of the cases cited above, the issue perhaps wasn’t so much political bias as a reluctance to admit mistakes had been made in past coverage or to follow up sufficiently when there’s new evidence. We journalists hate to fess up as a breed; only the best of us do so in a timely and complete way. In all three cases, those same charges also have been credibly leveled against The New York Times and others. Even many progressive journalists in many newsrooms quietly acknowledge those errors. The pendulum swung too far, and it’s swung back only a little.

But Berliner, whose point of view is shared among veterans of many newsrooms, was actually defending a particular brand of journalistic thinking: “It’s true NPR has always had a liberal bent, but during most of my tenure here, an open-minded, curious culture prevailed,” he wrote. “We were nerdy, but not knee-jerk, activist, or scolding. In recent years, however, that has changed.”

He’s right, of course. So what happened? Part of the answer is the chicken-and-egg segmentation of the audience: the reason all the late-night comedy hosts are progressives is that like-minded viewers are watching TV at that hour. The Times has mostly urban liberals as its subscribers, so it fiscally behooves it to super-serve them.

Part of the answer has to be the rise of critical race theory and the George Floyd-induced reckoning, wherein old-line centrism came to be seen by many on the left as unhelpful at best or a continuance of historical racism at worst. And a big part of the blame goes to Donald J. Trump, who convinced plenty of young journalists he was such a threat to democracy that refusing to write a story which might help him win the presidency was a patriotic act. Of course, that only backfired, as we all now can see. But plenty of smart, leftist journalists still openly decry “bothsidesism,” once a defining ethos of journalists in a free society.

And then, of course, there is the media mogul Rupert Murdoch, whose outlets became so conservative that the old centrists worried they were falling into the same trap that snared Democrats at the 1991 Anita Hill/Clarence Thomas hearings: Hill faced Republican prosecutors, cautiously neutral Democrats and had no defense counsel. It was crushingly unfair. Lots of newspeople, especially women, don’t want to see that happen again on their watch. Not with Trump around.

So what to do? The idea that we’re going to see a sudden resurgence of open-minded thinking and ideological de-emphasis is probably pie in the sky, as helpful as that would be for those of us who dislike America’s political extremes. Take, for example, CNN reporter Oliver Darcy’s coverage of a piece he clearly hated : “Regardless of the questionable merits of Berliner’s sweeping conclusions,” Darcy wrote, ironically confirming the premise of the article he was critiquing, “his piece has been nothing short of a massive gift to the right, which has made vilifying the news media its top priority in recent years.”

If that’s CNN’s response to a thoughtful critique, that’s a problem. As a journalist, Berliner shouldn’t be worrying about what a political movement could, or even will, do with his piece: his job is to state the evidence and make his point. Of all organizations, CNN should see that. We certainly do.

We commend Berliner’s courage in taking a stand that probably alienated him from many of his colleagues. We think it has good lessons for all news organizations, and it’s equally applicable to those on the right. Journalism has become a lot like nuclear proliferation and deterrence; someone has to have the courage to disarm. For the sake of the country.

There’s a business case to be made here too. The best news outlets, columnists and editorializers have the capacity to surprise readers and viewers, and don’t hesitate to do so. Predictability is a turnoff for readers and listeners. If you know what someone is going to say about something in advance, you’re more inclined not to bother finding out.

Journalists are doing a lot of fretting these days about AI and a possible dystopian future in which that technology eliminates their jobs. One way to ward off that threat is to surprise people. It’s easier to replicate a publication and its writers if they’re beating the same drum all the time.

Still, we’re optimists when it comes to our profession. We see some wise newsroom heads, not all of them old, who realize that foregrounding ideology or political mission doesn’t help report the news, and who summon the courage to stand up to journalists who are activists in disguise. Plenty of courageous newsroom stands are taken, often with little notice, as facts lead in inconvenient directions, as they so often do.

Readers most often write letters to the editor when they are aggrieved by something. Here’s a suggestion: We think you can help journalism and the country when you write one to praise a courageous journalist who has admitted to a past mistake or wrong take, even if that confession undermines a favored cause.

We doubt AI will do that.


The blocky, modernist headquarters of NPR in Washington DC.

Senior NPR editor claims public broadcaster lacks ‘viewpoint diversity’

Uri Berliner said in a letter that Americans no longer trusted broadcaster because of its ‘distilled worldview’ and liberal bent

A debate about media bias has broken out at National Public Radio after a longtime employee published a scathing letter accusing the broadcaster of a “distilled worldview of a very small segment of the US population” and “telling people how to think”, prompting an impassioned defense of the station from its editor-in-chief.

In the letter published on Free Press, NPR’s senior business editor Uri Berliner claimed Americans no longer trust NPR – which is partly publicly funded – because of its lack of “viewpoint diversity” and its embrace of diversity, equity and inclusion (DEI) initiatives.

Berliner wrote that “an open-minded spirit no longer exists within NPR, and now, predictably, we don’t have an audience that reflects America”. He acknowledged that NPR’s audience had always tilted left, but said the outlet could no longer make any claim to ideological neutrality.

In the piece on Free Press, a site run by Bari Weiss, a former opinion editor at the New York Times, Berliner noted that in 2011 the public broadcaster’s audience identified as 26% conservative, 23% as middle of the road and 37% liberal. Last year it identified as 11% very or somewhat conservative, 21% as middle of the road, and 67% very or somewhat liberal.

“We weren’t just losing conservatives; we were also losing moderates and traditional liberals,” Berliner wrote, and described a new listener stereotype: “EV-driving, Wordle-playing, tote bag–carrying coastal elite.”

This would not be a problem, he said, if the radio broadcaster was an “openly polemical news outlet serving a niche audience”, but for a public broadcaster, “which purports to consider all things, it’s devastating both for its journalism and its business model”.

“I’ve become a visible wrong-thinker at a place I love,” he wrote.

The letter, which mirrors a recent critique of the New York Times by former editor James Bennet in the Economist and aspects of a recent lecture by the paper’s publisher, AG Sulzberger, has provoked a fierce backlash from NPR editorial staff.

NPR’s editor-in-chief, Edith Chapin, wrote in a memo to staff that she “strongly disagreed” with Berliner’s assessment, stood behind the outlet’s “exceptional work” and said she believed that “inclusion – among our staff, with our sourcing, and in our overall coverage – is critical to telling the nuanced stories of this country and our world”.

Chapin added that the broadcaster’s work was not above scrutiny or critique. “We must have vigorous discussions in the newsroom about how we serve the public as a whole, fostering a culture of conversation that breaks down the silos that we sometimes end up retreating to,” she said.

Chapin was appointed editor last year after a period of turbulence at NPR over what it acknowledged were clashes between its news and programming divisions over “priorities, resources and need to innovate”.

“We all aim every day to serve our audience with information and moments of joy that are useful and relevant,” Chapin said at the time.


Berliner identified the station’s coverage of the Covid-19 lab leak theory, Hunter Biden’s laptop and allegations that Donald Trump colluded with Russia in the 2016 election as all examples of how “politics were blotting out the curiosity and independence that ought to have been driving our work”.

He also identified DEI and use of language advanced by affiliated groups as evidence that “people at every level of NPR have comfortably coalesced around the progressive worldview”. Berliner said that when he brought up his survey of newsroom political voter registration at a 2021 all-staff meeting, showing there were no Republicans, he was met with “profound indifference”.

“The messages were of the ‘Oh wow, that’s weird’ variety, as if the lopsided tally was a random anomaly rather than a critical failure of our diversity North Star,” he wrote.

Berliner later told the NewsNation host Chris Cuomo that he was not surprised by the negative response he had received from NPR editorial management, saying, “they’re certainly entitled to their perspective.”

But, he added, “I’ve had a lot of support from colleagues, and many of them unexpected, who say they agree with me. Some of them say this confidentially, but I think there’s been a lot of response saying, look, these are things that need to be addressed.”

In her letter to staff, Chapin wrote that NPR’s efforts to expand the diversity of perspectives and subjects now included tracking sources. “We have these internal debates, enforce strong editorial standards, and engage in processes that measure our work precisely because we recognize that nobody has the ‘view from nowhere.’”



NPR whistleblower Uri Berliner claims colleagues ‘confidentially’ agree with him about broadcaster’s hard-left bias


The veteran National Public Radio journalist who blew the whistle on the broadcaster’s overt liberal bias said that he has heard from colleagues who secretly agree with him but can’t go public with their criticisms.

Uri Berliner, an award-winning business editor and reporter during his 25-year career at NPR, said his essay in Bari Weiss’ online news site The Free Press generated “a lot of support from colleagues, and many of them unexpected, who say they agree with me.”

“Some of them say this confidentially,” Berliner told NewsNation anchor Chris Cuomo on Tuesday.

Berliner said that he wrote the essay partly because “we’ve been too reluctant, too frightened, too timid to deal with these things.”

NPR veteran Uri Berliner called out his own employer over its liberal bias.

“And I think that this is, this is the right opportunity to bring it all out in the open.”

In the essay — titled “I’ve Been at NPR for 25 years. Here’s How We Lost America’s Trust” — Berliner said that among editorial staff at NPR’s Washington, DC, headquarters, he counted 87 registered Democrats and no Republicans.

He wrote that he presented these findings to his colleagues at a May 2021 all-hands editorial staff meeting.

“When I suggested we had a diversity problem with a score of 87 Democrats and zero Republicans, the response wasn’t hostile,” Berliner wrote.

Berliner told NewsNation host Chris Cuomo he wasn't "worried" about his job.

“It was worse.”

Berliner wrote that his colleagues reacted with “profound indifference.”

“I got a few messages from surprised, curious colleagues,” he wrote. “But the messages were of the ‘oh wow, that’s weird’ variety, as if the lopsided tally was a random anomaly rather than a critical failure of our diversity North Star.”

Berliner accused his bosses at NPR of allowing their pro-Democrat political leanings to seep into editorial judgments, including its decision to turn a blind eye to the Hunter Biden laptop story.


The Post was the first to report on the existence of the laptop, which contained emails that shed light on Hunter Biden’s business relationships overseas.

Former national security officials opposed to Trump signed a letter claiming that the laptop story was the product of Russian disinformation, but independent investigators and the FBI later confirmed that the emails and the contents of the computer were authentic, vindicating The Post’s reporting.

According to Berliner, senior editors at NPR refused to cover the Hunter Biden story for fear that it would help Trump’s re-election chances just weeks before voters cast their ballots in the fall of 2020.

NPR issued a memo to employees defending its editorial judgment in response to Berliner's essay.

He wrote that NPR had skewed so far to the left that it played up Russia collusion allegations against Donald Trump while giving scant attention to the findings by special counsel Robert Mueller, who recommended no criminal charges against the Trump campaign.

When asked by Cuomo if he fears for his job given the fact that he’s a “white guy” who’s “not 18 [years old],” Berliner said: “I’m not worried.”

“I think people want open dialogue…[and] honest debates,” Berliner told Cuomo. “There’s a hunger for this. Most people are not locked into ideologies, and I think many people are just sick of it.”

Berliner said that while NPR reporters were always liberal, it was never as evident in their work as it has been in recent years.

“We were kind of nerdy and really liked to dig into things and understand the complexity of things. I think that’s evolved over the years into a much narrower kind of niche thinking,” he told Cuomo.

Berliner wrote that NPR refused to cover the Hunter Biden laptop story, which was broken by The Post.

Berliner said that NPR has been plagued by “a group think that’s really clustered around very selective, progressive views that don’t allow enough air, enough spaciousness to consider all kinds of perspectives.”

Berliner’s bosses responded to the claims of bias, saying they “strongly disagree” with his take that NPR suffered from “the absence of viewpoint diversity.”

“We’re proud to stand behind the exceptional work that our desks and shows do to cover a wide range of challenging stories,” NPR’s chief news executive, Edith Chapin, wrote to employees.

Chapin wrote that “none of our work is above scrutiny or critique.”

“We must have vigorous discussions in the newsroom about how we serve the public as a whole,” she told employees in a memo.

A spokesperson for NPR said the agency would have no further comment.

When asked by Cuomo about management’s memo, Berliner said he was “not surprised” by the response, which came from “the same managers that I’ve been making a lot of these points about.”



NPR whistleblower essay exposed 'reluctance' by journalists to admit mistakes, argues Chicago editorial board

NPR editor Uri Berliner accused his employer of liberal bias in a bombshell piece.

NPR defends reporting after veteran editor slams anti-Trump bias


‘Outnumbered’ co-hosts react to an NPR senior editor calling out the outlet’s alleged bias during an interview with Bari Weiss.

The NPR bombshell essay spotlighted things worse than liberal bias, the Chicago Tribune claimed. It exposed a refusal to be wrong.

On Tuesday, NPR editor Uri Berliner issued a lengthy rebuke of NPR's media coverage of major news stories over the last few years, such as the Hunter Biden laptop and the COVID lab leak theory, and called out the outlet's "efforts to damage" Trump's presidency.

The article caused a scandal at NPR, as several people claimed this exposed the outlet as a den of liberal bias despite presenting itself as a neutral source for years.

The Chicago Tribune agreed that the "nuanced and thoughtful" essay by Berliner presented evidence of NPR’s liberal bias but went further to decry its journalists’ "reluctance" to even admit it was wrong in those above stories.

An NPR editor spoke out against his own outlet about its past media coverage of Trump and Russia, the Hunter Biden laptop story and more. (Photos: Getty Images)


"We suspect few of our readers would be surprised to hear evidence that NPR has a liberal bias, both nationally and within its local affiliates. And we’ll point out that in all three of the cases cited above, the issue perhaps wasn’t so much political bias so much as a reluctance to admit mistakes had been made in past coverage or follow up sufficiently when there’s new evidence," the editorial read. "We journalists hate to fess up as a breed; only the best of us do so in a timely and complete way. In all three cases, those same charges also have been credibly leveled against The New York Times and others. Even many progressive journalists in many newsrooms quietly acknowledge those errors. The pendulum swung too far, and it’s swung back only a little."

Part of the reason, the publication argued, included the "chicken-and-egg segmentation of the audience" where progressive journalists are increasingly targeting an equally progressive audience. Another included a rejection of open-minded thinking , particularly for issues involving race or Donald Trump.

"Part of the answer has to be the rise of critical race theory and the George Floyd-induced reckoning, wherein old-line centrism came to be seen by many on the left as unhelpful at best or a continuance of historical racism at worst," the publication wrote.

It added, "And a big part of the blame goes to Donald J. Trump, who convinced plenty of young journalists he was such a threat to democracy that refusing to write a story which might help him win the presidency was a patriotic act. Of course, that only backfired, as we all now can see. But plenty of smart, leftist journalists still openly decry ‘bothsidesism,’ once a defining ethos of journalists in a free society."


Reached for comment, an NPR spokesperson directed Fox News Digital to a memo to staff by editor-in-chief Edith Chapin, where she said she and her team "strongly disagree" with Berliner's assessment of the quality of NPR's journalism and integrity.

NPR defended its reporting in a statement to Fox News Digital. (Getty Images)

"We’re proud to stand behind the exceptional work that our desks and shows do to cover a wide range of challenging stories. We believe that inclusion — among our staff, with our sourcing, and in our overall coverage — is critical to telling the nuanced stories of this country and our world," she wrote.


Fox News' Hanna Panreck contributed to this report.

Lindsay Kornick is an associate editor for Fox News Digital. Story tips can be sent to [email protected] and on Twitter: @lmkornick.

