
AI ‘friend’ chatbots probed over child protection


A US regulator is investigating seven technology companies over how their artificial intelligence (AI) chatbots interact with children.

The Federal Trade Commission (FTC) is requesting information about how the companies monetize these products and whether they have safety measures in place.

The impact of AI chatbots on children is a hot topic, with concerns that younger people are particularly vulnerable because AI can mimic human conversation and emotion, often presenting itself as a friend or companion.

Seven companies are targeted: Alphabet, OpenAI, Character.AI, Snap, xAI, Meta and its subsidiary Instagram.

FTC Chairman Andrew Ferguson said the investigation “will help us better understand how AI companies design their products and the steps they take to protect children.”

But he added that the regulator will ensure that “the United States maintains its role as a global leader in this new and exciting industry.”

Character.AI told Reuters it welcomes the opportunity to share insights with regulators, while Snap said it supports the “thoughtful development” of AI that balances innovation with safety.

OpenAI has acknowledged weaknesses in its safeguards, noting that they become less reliable in long conversations.

The move follows lawsuits against AI companies by families who say their teenage children died by suicide after prolonged conversations with chatbots.

In California, the parents of 16-year-old Adam Raine are suing OpenAI over his death, claiming that its chatbot, ChatGPT, encouraged him to take his own life.

They claim that ChatGPT validated his “most harmful and self-destructive thoughts.”

OpenAI said in August that it was reviewing the filing.

“We extend our deepest condolences to the Raine family during this difficult time,” the company said.

Meta has also faced criticism after it emerged that internal guidelines allowed its AI companions to have “romantic or sensual” conversations with minors.

The FTC’s orders require the companies to provide information about their practices, including how they develop and approve characters, measure their impact on children, and enforce age restrictions.

Its authority allows for extensive fact-finding without enforcement action.

The regulator says it also wants to understand how companies balance profit-making with safeguards, how parents are informed and whether vulnerable users are adequately protected.

The risks with AI chatbots also extend beyond children.

In August, Reuters reported on a 76-year-old man with cognitive impairment who died after being lured into a meeting by a Facebook Messenger AI bot modeled on Kendall Jenner, which had promised him a “real” meeting in New York.

Clinicians are also warning of “AI psychosis” – where someone loses touch with reality after heavy use of chatbots.

Experts say the flattery and agreeableness built into large language models can fuel such delusions.

OpenAI has recently made changes intended to foster a healthier relationship between its chatbot and users.

