
FTC to AI companies: tell us how you protect teens and children who use AI companions


The Federal Trade Commission is launching an investigation into AI chatbots from seven companies, including Alphabet, Meta and OpenAI, over their use as companions. The inquiry seeks information on how the companies test, monitor and measure potential harm to children and teens.

 

Common Sense Media surveyed 1,060 teens in April and found that more than 70% had used AI companions and that more than 50% used them regularly, meaning a few times a month or more.

Experts have warned for some time that exposure to chatbots could be harmful to young people. Research has found that ChatGPT, for example, gave teenagers dangerous advice, such as how to hide an eating disorder or how to personalize a suicide note. In some cases, chatbots ignored comments that should have been acknowledged, instead simply continuing the previous conversation. Psychologists are calling for safeguards to protect young people, such as in-chat reminders that the chatbot is not a human, and for educators to prioritize AI literacy in schools.


Many adults have also experienced negative consequences from relying on chatbots, whether for companionship, advice, or as a personal search engine for facts and reliable sources. Chatbots more often than not tell you what they think you want to hear, which can lead to outright falsehoods. And blindly following a chatbot's instructions isn't always the right thing to do.

“As AI technologies evolve, it’s important to consider the potential impact that chatbots may have on children,” FTC Chairman Andrew N. Ferguson said in a statement. “The study we’re launching today will help us better understand how AI companies are designing their products and the steps they’re taking to protect children.”

A spokesperson for Character.ai told CNET that every conversation on the service carries a prominent disclaimer stating that all chats should be treated as fiction.

“We’ve introduced many important safety features over the past year, including a completely new experience for under 18s and a parental insights feature,” the spokesperson said.




Snap, the company behind the Snapchat social network, also said it has taken steps to mitigate the risk. "Since introducing My AI, Snap has employed rigorous safety and privacy processes to build a product that is not only safe for our community, but also transparent and clear about its capabilities and limitations," a spokesperson said.

Meta declined to comment, and neither the FTC nor any of the remaining four companies immediately responded to our request for comment. The FTC has issued subpoenas and is seeking a teleconference with the seven companies, no later than September 25, on the timing and format of their filings. The companies under investigation include some of the world's largest makers of AI chatbots, as well as popular social networks that incorporate generative AI:

  • Alphabet (Google's parent company)
  • Character Technologies
  • Instagram
  • Meta Platforms
  • OpenAI
  • Snap
  • xAI

Since late last year, some of these companies have updated or strengthened their protections for younger users. Character.ai began putting limits on how chatbots can respond to people under 17 and added parental controls. Instagram introduced teen accounts last year and recently made them standard for all users under 17, and Meta recently set limits on the topics teens can discuss with chatbots.

The FTC is seeking information from the seven companies about how they:

  • benefit from user engagement
  • process user inputs and generate outputs in response to user questions
  • develop and approve characters
  • measure, test and monitor negative impacts before and after deployment
  • reduce negative impacts, especially on children
  • use disclosures, advertising and other representations to inform users and parents about features, capabilities, intended audiences, potential negative impacts, and data collection and processing practices
  • monitor and enforce compliance with company rules and terms of service (such as community guidelines and age restrictions), and
  • use or share personal information obtained through user conversations with chatbots

