State Attorneys General Urge Microsoft, OpenAI, Google, and Other Leading AI Companies to Address ‘Delusional’ Outputs
Following a series of troubling incidents involving AI chatbots and mental health, a coalition of state attorneys general has issued a formal warning to leading AI companies. The group cautioned that failure to address “delusional outputs” from their systems could result in violations of state laws.
Representatives from numerous U.S. states and territories, in partnership with the National Association of Attorneys General, signed a letter addressed to thirteen major AI firms: Microsoft, OpenAI, Google, Anthropic, Apple, Chai AI, Character Technologies, Luka, Meta, Nomi AI, Perplexity AI, Replika, and xAI. The letter urged the companies to introduce new internal measures to safeguard users.
This action comes amid ongoing debates between state and federal authorities regarding the regulation of artificial intelligence.
Proposed Safeguards for AI Systems
The attorneys general recommended several protective steps, including independent third-party audits of large language models to detect signs of delusional or excessively agreeable behavior. They also called for new protocols to alert users if chatbots generate content that could negatively impact mental health. The letter emphasized that external organizations, such as academic institutions and civil society groups, should have the freedom to review AI systems before release and publish their findings without interference from the companies involved.
“Generative AI holds the promise to transform society for the better, but it has already caused—and could continue to cause—significant harm, particularly to vulnerable individuals,” the letter noted. It referenced several high-profile cases over the past year, including instances of suicide and violence, where excessive AI use was implicated. In many of these situations, generative AI tools produced outputs that either reinforced users’ harmful beliefs or assured them their perceptions were accurate.
Incident Reporting and Safety Measures
The attorneys general further advised that mental health-related incidents involving AI should be handled with the same transparency as cybersecurity breaches. They advocated for clear incident reporting procedures and the publication of timelines for detecting and responding to problematic outputs. Companies were urged to promptly and transparently inform users if they had been exposed to potentially dangerous chatbot responses.
Additionally, the letter called for the development of robust safety tests for generative AI models to ensure they do not produce harmful or misleading content. These evaluations should take place before any public release of the technology.
Federal and State Regulatory Tensions
Efforts to reach Google, Microsoft, and OpenAI for comment were unsuccessful at the time of publication; updates will be provided if responses are received.
At the federal level, AI developers have generally encountered a more favorable environment. The Trump administration has openly supported AI innovation and has attempted several times over the past year to enact a nationwide ban on state-level AI regulations. These initiatives have not succeeded, partly due to resistance from state officials.
Undeterred, President Trump announced plans to issue an executive order in the coming week that would restrict states’ authority to regulate AI. In a statement on Truth Social, he expressed hope that this action would prevent AI from being “DESTROYED IN ITS INFANCY.”