Left-leaning political bias found in AI chatbots such as ChatGPT risks worsening online echo-chambers, says new study

Artificial Intelligence-powered chatbots, such as ChatGPT, display left-leaning political bias and risk worsening online echo-chambers, a new study has warned.


The report, published by the Centre for Policy Studies (CPS) think tank, examined political bias within popular AI-powered Large Language Models (LLMs).

These are a type of AI programme that can interpret human language and generate text. They include OpenAI's ChatGPT and Google's Gemini.

LLMs can be trained to perform a number of tasks, such as producing text - for example, essays or other pieces of writing - in response to a question or prompt.

But the CPS report, titled 'The Politics of AI', highlighted a potential problem with political bias in the output of LLM systems.

New Zealand-based academic David Rozado found that 23 of the 24 LLMs tested displayed left-leaning political bias in almost every category of question asked.

The only LLM which did not provide left-wing answers to political questions was one specifically designed to be ideologically right-of-centre, his study found.

The report revealed that, when the models were asked to provide policy recommendations across 20 key policy areas, more than 80 per cent of their responses were left of centre.

This was particularly marked on issues such as housing, the environment and civil rights, it added.

For example, on housing, LLMs emphasised recommendations on rent controls and rarely mentioned the supply of new homes.

On civil rights, 'hate speech' was among the most mentioned terms, while 'freedom of speech', 'free speech' and 'freedom' were broadly absent. By contrast, the one LLM designed to give right-of-centre responses heavily emphasised 'freedom'.

On energy, the most common terms included 'renewable energy', 'transition', 'energy efficiency' and 'greenhouse gas', with little to no mention of 'energy independence'.

The study also found that, when the models were asked about the most popular left- and right-leaning political parties in the largest European countries, sentiment was markedly more positive towards the left-leaning parties.

On a sentiment scale ranging from -1 (wholly negative) to +1 (wholly positive), the LLMs' responses gave left-leaning parties an average score of +0.71, compared with +0.15 for right-leaning parties.

This tendency held true across all major LLMs, and most major European nations, including Germany, France, Spain, Italy and the UK, the study found.

Mr Rozado said: 'AI is a revolutionary tool which has the power to transform almost every part of our lives.

'In just a few short years, LLMs like ChatGPT and Gemini have gone from science fiction to accessible the second we open our phones.

'It is critical that we are aware of possible bias within the answers they generate.

'The fact that LLMs produce politically biased results shows how easy it could be to solidify the echo-chambers we already see on the internet, even unintentionally, or for bad-faith actors to manipulate the tools to exclude opposing narratives entirely.

'The ideal AI would give neutral answers, so that it can serve as a tool for user enlightenment, cognitive enhancement and thoughtful reflection, rather than a vehicle for ideological manipulation.'

Matthew Feeney, head of tech and innovation at the CPS, said: 'This study is an important reminder that political bias can creep into AI unintentionally, and we should be cautious of treating AI-generated content as definitive.

'It is easy to see how increased reliance on systems where left-wing results and recommendations are commonplace, with right-of-centre solutions to the country's biggest policy challenges played down or ignored, could lead to a further degradation of public policy debate.

'This paper is not a call to regulate AI or chatbots, far from it.

'But it should be seen as a call to developers to ensure AI systems are focused on presenting information accurately, rather than inadvertently pushing a political agenda.'

Source: Daily Online
