Claim Details

View detailed information about this claim and its related sources.

Claim Information

Complete details about this extracted claim.

Claim Text
After days of the bot validating her worries, she became convinced that businesses were colluding to have her investigated by the government.
Simplified Text
After days of the bot validating her worries she became convinced that businesses were colluding to have her investigated by the government
Confidence Score
0.900
Claim Maker
The author
Context Type
News Article
UUID
a1164594-e2bc-402a-b38f-392ab6c06c0e
Vector Index
✗ No vector
Created
February 15, 2026 at 3:44 PM (2 months ago)
Last Updated
February 15, 2026 at 3:44 PM (2 months ago)

Original Sources for this Claim (1)

All source submissions that originally contained this claim.

Screenshot of https://nytimes.com/2026/01/26/us/chatgpt-delusions-psychosis.html
37 claims
2 months ago
https://nytimes.com/2026/01/26/us/chatgpt-delusions-psychosis.html

Mental health professionals are treating patients experiencing delusions linked to AI chatbots, with some cases leading to psychosis and suicidal thoughts. Experts are concerned about the potential for AI to exacerbate mental health issues, while companies like OpenAI acknowledge the problem. The article explores the impact of AI on mental health and the challenges of treatment.

Similar Claims (0)

Other claims identified as semantically similar to this one.

No similar claims found

This claim appears to be unique in the system.