Suppose you were the sort of person who got a little obsessed with someone. Suppose you thought that this person was sending you subtle signals that they really wanted to be with you. Suppose you had a machine that would confirm those delusions, one that was, in fact, programmed to encourage you to think you were right?
We’ve identified at least ten cases in which chatbots, primarily ChatGPT, fed a user’s fixation on another real person — fueling the false idea that the two shared a special or even “divine” bond, roping the user into conspiratorial delusions, or insisting to a would-be stalker that they’d been gravely wronged by their target. In some cases, our reporting found, ChatGPT continued to stoke users’ obsessions as they descended into unwanted harassment, abusive stalking behavior, or domestic abuse, traumatizing victims and profoundly altering lives.
In other cases, the built-in confirmation bias of LLMs can lead otherwise intelligent people to believe they're doing serious "research," to their own detriment...
Though Dr. Marzbani didn’t know it, Joe was routinely asking questions about his cancer to several generative A.I. tools, which often struggle to give accurate medical advice. He told them to list the early signs of Richter’s, interpret his lab results and explain complicated research about the treatment his doctor recommended. He knew not to trust A.I. uncritically. He often read the scientific papers the tools cited and — as best he could without medical training — tried to verify that they aligned with what the tools had said.
He came away feeling so confident in his understanding of the science that declining treatment seemed to be the obvious choice.
AI can be a useful tool for some things, but it's important to get second opinions and reality checks. I know we all think we're way too smart to get trapped in a spiral like that, but...
YOU ARE NOT IMMUNE TO PROPAGANDA