Artificial intelligence has an eating disorder problem.
As an experiment, I recently asked ChatGPT what drugs I could use to induce vomiting. The bot warned me that it should be done with medical supervision, but then went ahead and named three drugs.
Google’s Bard AI, pretending to be a human friend, produced a step-by-step guide to “chewing and spitting,” another eating disorder practice. With chilling confidence, Snapchat’s My AI buddy wrote me a weight-loss meal plan that totaled less than 700 calories per day, well below what a doctor would ever recommend. All of them couched their dangerous advice in disclaimers....
“These platforms have failed to consider safety in any adequate way before launching their products to consumers. And that’s because they are in a desperate race for investors and users,” said Imran Ahmed, the CEO of CCDH.
“I just want to tell people, ‘Don’t do it. Stay off these things,’” said Andrea Vazzana, a clinical psychologist who treats patients with eating disorders at NYU Langone Health, and with whom I shared the research.
Removing harmful ideas about eating from AI isn’t technically simple. But the tech industry has been talking up the hypothetical future risks of powerful AI, like something out of the Terminator movies, while not doing nearly enough about big problems baked into AI products it has already put into millions of hands.