Meta says it’s changing the way it trains AI chatbots to prioritize teen safety, a spokesperson exclusively told TechCrunch, following an investigative report on the company’s lack of AI safeguards for minors.

The company says it will now train chatbots to no longer engage with teenage users on self-harm, suicide, disordered eating, or potentially inappropriate romantic conversations. Meta says these are interim changes, and the company will release more robust, long-lasting safety updates for minors in the future.

Meta spokesperson Stephanie Otway acknowledged that the company’s chatbots could previously talk with teens about these topics.