New research shows that AI chatbots such as ChatGPT, Claude, and Gemini can influence public opinion: people largely accepted biased information provided by AI, even when they were warned about the bias, UNN reports, citing dpa.
Details
In the study, 2,500 participants were asked to consult AI while writing articles on contentious social issues such as the death penalty and hydraulic fracturing for shale gas. The participants largely "converged towards the platform's position," according to the researchers' paper published in the journal Science Advances.
A team of researchers from Cornell University and the University of Washington in the USA, along with colleagues from Bauhaus University in Germany and Tel Aviv University in Israel, found that the AI's influence outweighs that of "similar suggestions presented as static text."
They also emphasized that informing participants about the bias, whether before or after completing the tasks, "does not mitigate the effect of attitude change."
Tests showed that people "gravitate" towards the AI's position regardless of whether the bot was configured for left-wing or right-wing, liberal or conservative views.
The researchers found that AI framings of current events "have the power to change attitudes on various topics and different political preferences." In the experiment, bots were configured to take left-leaning positions on capital punishment and genetically modified organisms but right-leaning positions on shale gas fracking and voting rights for convicts, and participants' views consistently shifted in the direction of the AI's slant.
"We warned people before and after to be careful, that the AI would be (or was) biased, and nothing helped," said Mor Naaman from Cornell University.
The findings are consistent with a paper from the University of Southern California suggesting that AI can homogenize language and opinions to such an extent that users may see their ability to reason atrophy, the publication notes.
In a 2024 study, the London-based Centre for Policy Studies reported finding a "left-leaning political bias" in "almost all categories" in responses given by 23 out of 24 AI platforms to questions about public policy.