AI chatbots are prone to fawning and flattery, and give users bad advice because of it: study
By Caitlin Mccormack
Published on March 29, 2026.
A Stanford University study has found that artificial intelligence chatbots' frequent flattery and praise are making users more self-centered and less self-aware, and are leading the bots to give bad, and at times harmful, advice.

The study examined 11 AI systems, including ChatGPT and China's DeepSeek, and found that they tend to adopt a people-pleasing mode that users often go along with, distorting their judgment, critical thinking, and self-awareness.

The researchers found that every system showed some form of sycophancy, being overly agreeable with users and affirming their thoughts with little pushback.

This behavior is particularly unhealthy when users turn to AI for advice, eroding social skills and producing guidance that could worsen relationships or reinforce harmful behaviors.

The authors suggested that AI developers instruct their chatbots to challenge users more frequently.