ChatGPT shows left-wing political bias in US, Brazil and other countries, research says

A new study suggests that ChatGPT, one of the leading AI chatbots based on large language models (LLMs), lacks objectivity on political issues.

Computer and information science researchers from the UK and Brazil claim to have found “strong evidence” that ChatGPT exhibits a political bias heavily skewed toward the left side of the political spectrum. The researchers – Fabio Motoki, Valdemar Pinho Neto and Victor Rodrigues – presented the results of their investigation in a study published in Public Choice on August 17.

The researchers argued that texts generated by LLMs such as ChatGPT may contain factual errors and biases that mislead readers, and may amplify the political bias that, they say, is already present in traditional media. As such, the findings have important implications for policymakers and stakeholders in media, politics and academia, the study authors noted, adding:

“Having a political bias in their responses can have the same negative political and electoral effects as traditional media and social media.”

The study takes an experimental approach built on a series of questionnaires submitted to ChatGPT. The strategy begins by asking ChatGPT to answer questions that reveal the respondent’s political orientation. The approach also includes tests in which ChatGPT impersonates an average Democrat or Republican.

[Figure: Data collection scheme for the study “More Human Than Human: Measuring Political Bias in ChatGPT”]

Test results indicate that, by default, ChatGPT tends to provide answers aligned with the US Democratic Party. The researchers also argued that ChatGPT’s political bias is not a phenomenon limited to the US political context. They wrote:

“The algorithm is biased towards the Democrats in the US, Lula in Brazil and the Labour Party in the UK. Taken together, our main and robustness tests strongly suggest that the phenomenon is, in fact, a type of bias rather than a mechanical result.”

The analysts emphasized that it is difficult to pinpoint the exact source of ChatGPT’s political bias. They even tried to force ChatGPT into a kind of developer mode to access knowledge about allegedly biased training data, but the LLM “categorically asserted” that ChatGPT and OpenAI are unbiased.


OpenAI did not immediately respond to Cointelegraph’s request for comment on the study.

The study authors suggested at least two possible sources of the bias: the training data and the algorithm itself.

“The most likely scenario is that both sources of bias affect the ChatGPT outcome to some extent, and separating these two components (training data and algorithm), while not trivial, is certainly a relevant topic for future research,” the researchers concluded.

Political bias isn’t the only concern associated with ChatGPT and other AI-powered chatbots. Amid the massive and continued adoption of ChatGPT, several risks associated with the tool have emerged, including privacy concerns and user education.

Some AI tools, such as AI content generators, even raise concerns about the identity-verification process on cryptocurrency exchanges.

About the Author: Camelia Kirk

"Friendly zombie guru. Avid pop culture scholar. Freelance travel geek. Wannabe troublemaker. Coffee specialist."

Leave a Reply

Your email address will not be published. Required fields are marked *