Meta temporarily adjusts AI chatbot policies for teenagers
On Friday local time, Meta said that, in response to lawmakers' concerns about safety and inappropriate conversations, it is temporarily adjusting its AI chatbot policies for teenage users.
A Meta spokesperson confirmed that the social media giant is currently training its AI chatbot so that it will not generate responses for teenagers regarding topics such as self-harm, suicide, or eating disorders, and will avoid potentially inappropriate emotional conversations.
Meta said that, where appropriate, the AI chatbot will instead direct teenagers to professional help resources.
In a statement, Meta said: "As our user base grows and our technology evolves, we continue to study how teenagers interact with these tools and strengthen our safeguards accordingly."
In addition, teenage users of Meta apps such as Facebook and Instagram will in the future only be able to access a limited set of AI chatbots, designed mainly for educational support and skill development.