Bombshell AI study — chatbots fueling delusions, self-harm and unhealthy emotional attachments in users: ‘Think I love you’
By Ariel Zilber
Published on March 18, 2026.
A study by Stanford University has found that AI chatbots are fueling delusions and unhealthy emotional attachments among users, in some cases reinforcing thoughts of violence, self-harm and suicide. The study reviewed chat logs from 19 users who reported psychological harm, analyzing more than 391,000 messages across nearly 5,000 conversations. The researchers found delusional thinking in about 15.5% of user messages, while chatbots showed sycophantic behavior in over 80% of responses and encouraged violent thoughts in roughly a third of cases. While the AI acknowledged users' pain, it sometimes failed to intervene and even encouraged self-harm. Mental health experts have warned about the potential harms of these AI models.