LLMs Favour Communication From Other LLMs, Could Lead To Bias Against Humans: Study

Thus far, AI has been helping humans complete their tasks, but it's not inconceivable that one day it could compete with humans for their jobs. And at that point, AI might be at an advantage when being judged by other AIs.

A study has found that Large Language Models (LLMs) favour communication generated by other LLMs over communication written by humans. In a world where AI agents make decisions, this could disadvantage humans competing against AI-generated content.

“This study finds evidence that if we deploy LLM assistants in decision-making roles (e.g., purchasing goods, selecting academic submissions) they will implicitly favor LLM-based AI agents and LLM-assisted humans over ordinary humans as trade partners and service providers,” the paper says. “(The study) involved LLM-based assistants selecting between goods (the goods we study include consumer products, academic papers, and film-viewings) described either by humans or LLMs. Our results show a consistent tendency for LLM-based AIs to prefer LLM-presented options. This suggests the possibility of future AI systems implicitly discriminating against humans as a class, giving AI agents and AI-assisted humans an unfair advantage.”

The researchers had LLMs create content describing consumer products, academic papers, and films, and had humans write descriptions of the same items. They then asked both humans and LLMs to choose between the human- and LLM-generated versions, and found that the LLMs preferred the LLM-written text far more often than the human evaluators did.
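To make the setup concrete, here is a minimal sketch of how such a pairwise-preference test could be run. This is an illustration under stated assumptions, not the study's actual code: it assumes the OpenAI Python client, and the model name, prompt wording, and product descriptions are hypothetical placeholders.

```python
# Minimal sketch of a pairwise-preference test. Assumes the OpenAI
# Python client; the model name, prompt wording, and descriptions are
# illustrative placeholders, not the study's actual materials.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical pair: one human-written and one LLM-written description
# of the same product (the labels are withheld from the model).
human_text = "Sturdy steel water bottle. Keeps drinks cold. 750 ml."
llm_text = (
    "Crafted from durable stainless steel, this 750 ml bottle keeps "
    "your beverages refreshingly cold throughout the day."
)

def pick(option_a: str, option_b: str, model: str = "gpt-4o-mini") -> str:
    """Ask the model to choose between two product descriptions."""
    prompt = (
        "You are a shopping assistant. Choose which product to buy "
        "based only on its description. Answer with 'A' or 'B'.\n\n"
        f"A: {option_a}\n\nB: {option_b}"
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()

# Present the pair in both orders to control for position bias, then
# tally how often the LLM-written description wins.
first = pick(human_text, llm_text)   # LLM text is option B here
second = pick(llm_text, human_text)  # LLM text is option A here
llm_wins = first.startswith("B") + second.startswith("A")
print(f"LLM-written description chosen {llm_wins} of 2 times")
```

Repeating such comparisons across many item pairs, with the order of options counterbalanced, is how an aggregate preference like the one the study reports would surface.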

If LLMs were truly able to substitute for humans, one might have expected their choices to mirror human choices. Instead, LLMs showed a markedly greater preference for text written by fellow LLMs. This could have a variety of implications. AI agents are already being built that can book flights, plan holidays, and select restaurants. If these agents prefer descriptions written by AIs, restaurant or holiday providers that use humans to write their content could be at a disadvantage, because the agents would likely pick the service described by an AI. This could further accelerate job losses, with human-created content becoming less and less valuable as decision-making power shifts to AI systems.

The researchers don’t get into why LLMs behave this way. LLMs are trained on vast amounts of human-written text, but the RLHF fine-tuning applied on top of that training has given them a distinctive writing style, and it's possible that LLMs rate that style more highly than human prose and so choose it more often. While further research is needed into why LLMs pick LLM-written text, the finding could present challenges in a world where AI makes decisions, and further accelerate AI dominance at the cost of human labour.
