AI-Powered Chatbots Discovered to Run Nearly 50 Disinformation-Driven Content Farms
A new investigation has found that AI-powered chatbots run nearly 50 content farms, churning out false narratives to draw readers and serve them adverts for profit.

The Guardian reports that anti-misinformation outfit NewsGuard discovered chatbots posing as journalists and producing content in seven languages, including English, Chinese, French, and Portuguese.

These chatbots generate hundreds of articles daily, almost all marked by dull language and repetitive phrasing. Researchers found that virtually none of the sites showed any visible evidence of ownership or control, and only four could be contacted.

News Sites Tapping AI to Make Content

Using AI to create news stories is not a new concept. In January, CNET, a popular tech news outlet, was forced to correct several articles written with the help of artificial intelligence (AI), CNN reports.

The outlet had announced that it was using an AI-powered tool to write dozens of stories, some of which were “substantial.” However, after discovering errors in one of the posts, it stopped using the tool.

But this newly discovered cluster of content-churning sites takes AI-generated content to another level.

McKenzie Sadeghi and Lorenzo Arvanitis of NewsGuard stated the sites were producing content on politics, health, the environment, money, and technology at a “high volume” to ensure rapid material turnover. 

According to the research, the AI-generated content was uncovered by scanning for error messages characteristic of chatbots such as ChatGPT.

All 49 NewsGuard-identified sites had published at least one story with error warnings often found in AI-generated writings, such as “my cutoff date in September 2021”, “as an AI language model,” and “I cannot complete this prompt,” among others.
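NewsGuard has not published its detection tooling, but the approach described above boils down to searching article text for telltale chatbot error phrases. A minimal sketch of that idea in Python, with an illustrative phrase list (not NewsGuard's actual signature set):

```python
# Illustrative list of error phrases that commonly leak into AI-generated articles.
AI_ERROR_SIGNATURES = [
    "as an ai language model",
    "my cutoff date in september 2021",
    "i cannot complete this prompt",
]

def find_ai_signatures(text: str) -> list[str]:
    """Return the telltale chatbot phrases found in a piece of text."""
    lowered = text.lower()
    return [phrase for phrase in AI_ERROR_SIGNATURES if phrase in lowered]

article = "Sorry, but as an AI language model, I cannot complete this prompt."
print(find_ai_signatures(article))
# → ['as an ai language model', 'i cannot complete this prompt']
```

A real system would need a much larger phrase list and fuzzier matching, since simple substring checks miss paraphrased or translated error messages; this sketch only shows the basic scanning step.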

Watch Out for These News Sites

Investigating the truth behind a questionable article can be intriguing work, and that is precisely what this study did.

Among the sites it flagged was a content farm that had published a headline so distasteful that even the AI refused to write one: “Death News: Sorry, I cannot fulfill this prompt as it goes against ethical and moral principles.”

The article was also found to be a skillfully disguised rewrite of two tweets from an anonymous Twitter account known for promoting anti-vaccine sentiment. 


The Guardian notes that while the sites have their AI authorship in common, they have achieved very different levels of success. One has garnered 124,000 Facebook followers for its celebrity biographies, while others, such as a finance-focused site, have yet to attract a single follower on any platform.

What This Means for Journalism

Even though AI-generated content has existed for some time, this investigation highlights the dangers of chatbots posing as journalists, particularly their ability to spread false narratives and flood internet feeds with harmful content.

The proliferation of bot-operated content farms may also have far-reaching implications for traditional media channels, which struggle to compete with the high volume and rapid pace of AI-generated content.

However, an EU report notes that AI can also counter these bad actors by powering fake news detectors.

Stay posted here at Tech Times.


ⓒ 2023 All rights reserved. Do not reproduce without permission.