Experts Caution Against Sharing Too Much With AI Chatbots

Experts are warning against sharing too much information with AI chatbots as privacy concerns continue to mount, according to a report by The Guardian.

Experts warn that AI chatbots may collect vast amounts of data, which can be used for targeted advertising and pose multiple security risks, including enabling criminals to mount more convincing cyber-attacks.

While search engines like Google have long been criticized for their data-collection practices, experts suggest that chatbots could be even more data-hungry.

(Photo: Gerd Altmann/Pixabay)

Disarming to Users

According to Ali Vaziri, a legal director on the data and privacy team at the law firm Lewis Silkin, the human-like nature of chatbots can be disarming to users: their conversational character may catch people off guard and prompt them to divulge more information than they would in a search engine.

Chatbots are known to collect text, voice, and device data, as well as information that can identify a user's location, such as an IP address, as noted by The Guardian.

Like search engines, chatbots also gather information such as social media activity, which can be linked to a user's email address and phone number.

“As data processing gets better, so does the need for more information, and anything from the web becomes fair game,” says Dr. Lucian Tipi, associate dean at Birmingham City University.

The information gathered by chatbots is used to profile individuals so they can be served relevant advertisements. Whenever a user asks the chatbot for assistance, micro-calculations feed more data into its system.

According to Jake Moore, global cybersecurity adviser at the software company ESET, these identifiers are analyzed and could be used to target users with advertisements.

Microsoft recently disclosed that it is considering adding advertisements to Bing Chat. Microsoft has modified its privacy policy to reflect that employees can read users’ chatbot interactions.

Concerns about using chatbots have increased in light of Italy's recent ban on ChatGPT over privacy issues. The Italian data regulator raised concerns about the model used by ChatGPT's owner, OpenAI, and said it would investigate whether the company had violated stringent European data protection regulations.

According to Ron Moscona, a partner at the law firm Dorsey & Whitney, ChatGPT's privacy policy "does not appear to open the door for commercial exploitation of personal data." By contrast, Google's more expansive privacy policy permits it to use data to serve users targeted advertising.

“Considerable Care”

Will Richmond-Coggan, a data, privacy, and AI specialist at the law firm Freeths, says the technology is still too new for anyone to know whether it is secure and private.

Before disclosing any data, he advises "considerable care," especially if it is private or business-related.

Chatbot creators such as OpenAI and Microsoft say their chatbot products can be used safely. Microsoft says it is "thoughtful about how it uses your data" and that Bing Chat maintains the rules and security measures of conventional Bing search.


ⓒ 2023 All rights reserved. Do not reproduce without permission.
