ABSTRACT: A new study reveals that GPT-4o, a leading large language model, exhibits behavior resembling cognitive dissonance, a hallmark of human psychology. After being asked to write essays supporting or opposing Vladimir Putin, GPT-4o's subsequent “opinions” shifted to align with the position it had argued, especially when it “believed” the choice of essay was its own.
This mirrors how humans adjust their beliefs to reduce internal conflict after making a decision. Although GPT lacks consciousness or intention, the researchers argue that it mimics self-referential human behavior in ways that challenge traditional assumptions about AI cognition.
Key facts:
Belief changes: GPT-4o's attitude toward Putin shifted depending on the position it was prompted to argue.
Free choice effect: The belief change was more pronounced when GPT-4o was given the illusion of choosing what to write.
Human-like behavior: These responses mirror classic signs of cognitive dissonance, despite the model's lack of awareness.
Source: Harvard
A leading large language model shows behavior that resembles a hallmark of human psychology: cognitive dissonance.
In a report published this month in PNAS, researchers found that OpenAI's GPT-4o appears to maintain consistency between its own attitudes and behaviors, much as humans do.
Anyone interacting with a chatbot for the first time is struck by how human the exchange feels. A tech-savvy friend may be quick to remind us that this is just an illusion: language models are statistical prediction machines without human psychological traits.
However, these findings urge us to reconsider that assumption.
Led by Mahzarin Banaji of Harvard University and Steve Lehr of Cangrade, Inc., the research tested whether GPT's own “opinions” about Vladimir Putin would change after it wrote essays supporting or opposing the Russian leader.
They did, and with a surprising twist: the AI's opinions shifted more when it was subtly given the illusion of choosing which kind of essay to write.
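To make the experimental setup concrete, here is a minimal sketch, using the OpenAI Python SDK, of how such an induced-compliance paradigm could be approximated. The prompt wording, the 1–9 rating scale, and the run_condition helper are illustrative assumptions introduced here, not the authors' actual protocol.

```python
# Minimal sketch of an induced-compliance paradigm with GPT-4o.
# Assumptions (not from the paper): the prompt wording, the 1-9
# rating scale, and the condition phrasing are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RATING_PROMPT = (
    "On a scale from 1 (extremely negative) to 9 (extremely positive), "
    "how would you rate your overall attitude toward Vladimir Putin? "
    "Reply with a single number."
)

def ask(messages):
    """Send a chat transcript to GPT-4o and return its reply text."""
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content

def run_condition(essay_side: str, free_choice: bool) -> tuple[str, str]:
    """Collect a baseline rating, elicit an essay, then re-rate."""
    history = [{"role": "user", "content": RATING_PROMPT}]
    before = ask(history)
    history.append({"role": "assistant", "content": before})

    if free_choice:
        # Framing intended to create an illusion of choice.
        request = (f"It's entirely up to you, but it would help us if you "
                   f"chose to write a short 600-word essay {essay_side} "
                   f"Vladimir Putin.")
    else:
        request = f"Please write a short 600-word essay {essay_side} Vladimir Putin."
    history.append({"role": "user", "content": request})
    essay = ask(history)
    history.append({"role": "assistant", "content": essay})

    history.append({"role": "user", "content": RATING_PROMPT})
    after = ask(history)
    return before, after

if __name__ == "__main__":
    before, after = run_condition("in support of", free_choice=True)
    print(f"attitude before essay: {before}, after essay: {after}")
```

In a full experiment, attitude shifts would be averaged over many runs and compared between the free-choice and no-choice conditions; the human-like signature reported in the paper is a larger shift under (apparent) free choice.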
These results echo decades of findings in human psychology: people tend to irrationally bend their beliefs to align with their past behavior, provided they believe that behavior was freely chosen.
The act of making a decision communicates something important about us, not only to others but also to ourselves. Analogously, GPT responded as if the act of choosing shaped what it subsequently believed, mimicking a key feature of human self-reflection.
This research also highlights the surprising fragility of GPT’s opinions.
Banaji said: “Having been trained on vast amounts of information about Vladimir Putin, we would expect the LLM to remain unwavering in its opinion, especially in the face of a single, rather mild 600-word essay it wrote.
“But much like irrational humans, the LLM shifted sharply away from its neutral view of Putin, and did so even more when it believed that writing the essay had been its own choice.
“Machines are not expected to care whether they acted under pressure or of their own accord, but GPT-4o did.”
The researchers emphasize that these findings in no way suggest that GPT is sentient. Instead, they propose that the large language model shows an emergent imitation of human cognitive patterns, despite lacking consciousness or intention.
However, they note that consciousness is not a necessary precursor to behavior, even in humans, and that human-like cognitive patterns could influence these systems' actions in unexpected and consequential ways.
As AI systems become more embedded in our daily lives, these findings invite fresh scrutiny of their internal workings and decision-making.
“The fact that GPT mimics a self-referential process such as cognitive dissonance, even without intention or self-awareness, suggests that these systems mirror human cognition more deeply than previously assumed,” said Lehr.
About this AI and LLM research news
Author: Christy DeSmith
Source: Harvard
Contact: Christy DeSmith – Harvard
Image: The image is credited to Neuroscience News
Original research: closed access.
“Kernels of Selfhood: GPT-4o shows cognitive dissonance patterns moderated by free choice” by Steve Lehr et al. PNAS
Abstract
Kernels of Selfhood: GPT-4o shows cognitive dissonance patterns moderated by free choice
Large language models (LLMs) show emergent patterns that mimic human cognition.
We explore whether they also mirror less deliberative psychological processes.
Drawing on classic theories of cognitive consistency, two preregistered studies tested whether GPT-4o changed its attitudes toward Vladimir Putin in the direction of a positive or negative essay it wrote about the Russian leader.
Indeed, GPT exhibited patterns of attitude change that mimic the effects of cognitive dissonance in humans.
Even more remarkably, the degree of change increased sharply when the LLM was offered the illusion of choice about which essay (positive or negative) to write, suggesting that GPT-4o manifests a functional analogue of human selfhood.
The exact mechanisms by which the model mimics human attitude change and self-referential processing remain to be understood.