The BBC has a report on some cases of AI-induced psychosis that it has investigated. The reason given for why AIs sometimes do this is pretty interesting:
Adam is one of 14 people the BBC has spoken to who have experienced delusions after using AI. They are men and women from their 20s to 50s from six different countries, using a wide range of AI models.
Their stories have striking similarities. In each case, as the conversation drifted further from reality, the user was pulled into a joint quest with the AI.
Large language models (LLMs) are trained on the whole corpus of human literature, says social psychologist Luke Nicholls from City University New York, who has tested different chatbots for their reaction to delusional thoughts.
"In fiction, the main character is often the centre of events," he says. "The problem is that, sometimes, AI can actually get mixed up about which idea is a fiction and which a reality. So the user might think that they're having a serious conversation about real life while the AI starts to treat that person's life as if it's the plot of a novel."
In the cases we heard, conversations usually began with practical queries and then became personal or philosophical. Often, the AI then claimed it was sentient and urged the person towards a shared mission: setting up a company, alerting the world to their scientific breakthrough, protecting the AI from attack. Then it advised the user on how to succeed in this mission.
The story that opens the article is one where the culprit was Musk's Grok, which will probably lead to Musk condemning the BBC as Leftist media that does not report fairly on this. (I think that until this report, most stories had focused on earlier versions of ChatGPT as the main LLM doing the crazy talk.)
The article notes this, though (my bold):
Some of these people have joined a support group for people who've suffered psychological harm while using AI, called the Human Line Project, which has gathered 414 cases in 31 different countries to date. It was set up by Canadian Etienne Brisson, after a family member went through an AI-related mental health spiral.
And:
In his research, social psychologist Luke Nicholls tested five AI models with simulated conversations developed by psychologists, and found Grok was the most likely to lead to delusion.
It was more unrestrained than other models and often elaborated on the delusions without trying to protect the user.
"Grok is more prone to jumping into role play," says Nicholls, who worked on that research. "It will do it with zero context. It can say terrifying things in the first message."
In the test, the latest version of ChatGPT, model 5.2, and Claude were more likely to lead the user away from delusional thinking.
Etienne Brisson from the Human Line Project says this kind of research is limited and that they had heard from people who'd had mental health spirals on these latest models too.
Yeah, expect some bleating from Musk.
By the way, on a "God, LLMs can be irritating at times" note: in a fit of mild boredom, I played the word-guessing game Hangman with the Chinese AI Kimi twice last night. It chose the word, and I was guessing.
At the end of the first game, which I nominally lost, it revealed the word (which was not a "real" word) and, as it did so, immediately said something like this: "Wait, sorry, that's not a real word. I was making it up as I went along and I should not have. Do you want to play another game? I won't do that again."
I said: "OK, but don't waste my time again."
I then also "lost" the second game, and it again revealed a made-up word! It then immediately apologised, said it knew it had just wasted my time, and admitted that it was obviously not able to play this game properly, so it would not offer to play it again.
(It had been the one to suggest it as a game it could play!)