Most ChatGPT users think AI models have 'conscious experiences'
The more people use tools like ChatGPT, the more likely they are to believe the models are conscious, a perception that carries ramifications for legal and ethical approaches to AI.
Most people believe that large language models (LLMs) like ChatGPT have conscious experiences just like humans, according to a recent study.
Experts in technology and science overwhelmingly reject the idea that today's most powerful artificial intelligence (AI) models are conscious or self-aware in the same way that humans and other animals are. But as AI models improve, they are becoming increasingly impressive and have begun to show signs of what, to a casual outside observer, may look like consciousness.
The recently launched Claude 3 Opus model, for example, stunned researchers with its apparent self-awareness and advanced comprehension. In 2022, a Google engineer was suspended after publicly claiming that an AI system the company was building was "sentient."
In the new study, published April 13 in the journal Neuroscience of Consciousness, researchers argued that the perception of consciousness in AI matters as much as whether the models actually are sentient. This is especially true, they argued, as we consider the future of AI in terms of its usage, regulation and protection against negative effects.
It also follows a recent paper claiming that GPT-4, the LLM that powers ChatGPT, has passed the Turing test, which an AI passes if the humans interacting with it cannot reliably distinguish it from a person.
In the new study, the researchers asked 300 U.S. citizens how frequently they use AI tools and had them read a short description of ChatGPT.
They then answered questions about whether mental states could be attributed to it. Over two-thirds of participants (67%) attributed some possibility of self-awareness or phenomenal consciousness (the subjective sense of what it is like to be "you," as opposed to a non-sentient facsimile that merely simulates inner self-knowledge), while 33% attributed no conscious experience.
Participants were also asked to rate responses on a scale of 1 to 100, where 100 would mean absolute confidence that ChatGPT was experiencing consciousness, and 1 absolute confidence it was not. The more frequently people used tools like ChatGPT, the more likely they were to attribute some consciousness to it.
The key finding, that most people believe LLMs show signs of consciousness, demonstrates that "folk intuitions" about AI consciousness can diverge from expert intuitions, the researchers said in the paper. They added that the discrepancy might have "significant implications" for the ethical, legal and moral status of AI.
The scientists said the experimental design revealed that non-experts don't understand the concept of phenomenal consciousness in the way a neuroscientist or psychologist would. That doesn't mean, however, that the results won't have a big impact on the future of the field.
According to the paper, folk psychological attributions of consciousness may mediate future moral concerns toward AI, regardless of whether AI systems are actually conscious. The weight of public opinion, and the public's broad perceptions around any topic, often steers regulation as well as influencing technological development, they said.