One of the persistent questions in our brave new world of generative AI: If a chatbot is conversant like a person, if it reasons and behaves like one, then is it possibly conscious like a person? Geoffrey Hinton, a recent Nobel Prize winner and one of the so-called godfathers of AI, told the journalist Andrew Marr earlier this year that AI has become so advanced and adept at reasoning that “we’re now creating beings.” Hinton links an AI’s ability to “think” and act on behalf of a person to consciousness. The difference between the organic neurons in our heads and the synthetic neural networks of a chatbot is effectively meaningless, he said: “They are alien intelligences.”
Many people dismiss the idea, because chatbots frequently make embarrassing mistakes—glue on pizza, anyone?—and because we know, after all, that they are programmed by people. But a number of chatbot users have succumbed to “AI psychosis,” falling into spirals of delusional and conspiratorial thought at least in part because of interactions they’ve had with these programs, which act like trusted friends and use confident, natural language. Some users arrive at the conclusion that the technology is sentient.
The more effective AI becomes in its use of natural language, the more seductive the pull will be to believe that it’s living and feeling, just like us. “Before this technology—which has arisen in the last microsecond of our evolutionary history—if something spoke to us that fluidly, of course it would be conscious,” Anil Seth, a leading consciousness researcher