Survey says most believe generative AI is conscious, which may prove it’s good at making us hallucinate, too

When you interact with ChatGPT and other conversational generative AI tools, they process your input through algorithms to compose a response that can feel like it came from a fellow sentient being, despite the reality of how large language models (LLMs) function. Two-thirds of those surveyed for a study by the University of Waterloo nonetheless believe AI chatbots to be conscious in some form, in effect passing a version of the Turing Test by convincing those users that an AI is on par with a human in consciousness.
Generative AI, as embodied by OpenAI's work on ChatGPT, has progressed by leaps and bounds in recent years. The company and its rivals often talk about a vision for artificial general intelligence (AGI) with human-like intelligence. OpenAI even has a new scale to measure how close its models are to achieving AGI. But even the most optimistic experts don't suggest that AGI systems will be self-aware or capable of true emotions. Still, of the 300 people participating in the study, 67% said they believed ChatGPT could reason, feel, and be aware of its existence in some way.
There was also a notable correlation between how often someone uses AI tools and how likely they are to perceive consciousness within them. That's a testament to how good ChatGPT is at mimicking humans, but it doesn't mean the AI has awakened. The conversational approach of ChatGPT likely makes these tools seem even more human, though no AI model works like a human brain at all. And while OpenAI is working on an AI model capable of doing research autonomously, called Strawberry, that's still different from an AI that is aware of what it is doing and why.
“While most experts deny that current AI could be conscious, our research shows that for most of the general public, AI consciousness is already a reality,” University of Waterloo professor of psychology and co-lead of the study Dr. Clara Colombatto explained. “These results demonstrate the power of language because a conversation alone can lead us to think that an agent that looks and works very differently from us can have a mind.”
Customer Disservice
The belief in AI consciousness could have major implications for how people interact with AI tools. On the positive side, it encourages manners and makes it easier to trust what the tools do, which could make them easier to integrate into daily life. But trust comes with risks, ranging from overreliance on AI for decision-making to, at the extreme end, emotional dependence on it and fewer human interactions.
The researchers plan to look deeper into the specific factors that make people think AI has consciousness and what that means on an individual and societal level. The work will also include long-term studies of how those attitudes change over time and vary with cultural background. Understanding public perceptions of AI consciousness is crucial not only to developing AI products but also to the regulations and rules governing their use.
“Alongside emotions, consciousness is related to intellectual abilities that are essential for moral responsibility: the capacity to formulate plans, act intentionally, and have self-control are tenets of our ethical and legal systems,” Colombatto said. “These public attitudes should thus be a key consideration in designing and regulating AI for safe use, alongside expert consensus.”