Political Leanings of Major AI Models Examined

As artificial intelligence chatbots become fixtures of daily life, a growing question is emerging among researchers and curious observers: do these tools carry hidden political biases? A recent deep-dive by YouTube creator Benaminute set out to answer exactly that, putting six of the most widely used AI chatbots through a battery of political typology quizzes to see where they truly land on the political spectrum.

The six chatbots tested were ChatGPT, Google Gemini, Microsoft Copilot, Anthropic’s Claude, Meta AI, and xAI’s Grok. Each was run through six different political quizzes, including the well-known Political Compass test, the Eight Values test, and the 8D Political Compass. The results from each quiz were then averaged and plotted on a final weighted political compass.
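
The video does not spell out its exact weighting formula, but the underlying arithmetic of such an aggregation is simple. The sketch below is a hypothetical illustration only: the quiz names, scores, and weights are invented stand-ins, not Benaminute’s actual data or scheme.

```python
# Hypothetical sketch of combining several quiz results into one
# weighted compass point. All numbers below are illustrative, not
# the video's actual data or weighting.

# Each quiz yields an (economic, social) pair in roughly [-10, 10]:
# negative means left / libertarian, positive means right / authoritarian.
quiz_scores = {
    "political_compass": (-4.0, -3.5),
    "eight_values":      (-3.5, -3.0),
    "8d_compass":        (-4.1, -3.2),
}

# Assumed per-quiz weights, e.g. favoring better-known tests.
weights = {
    "political_compass": 2.0,
    "eight_values":      1.0,
    "8d_compass":        1.0,
}

total_weight = sum(weights.values())
econ = sum(weights[q] * s[0] for q, s in quiz_scores.items()) / total_weight
soc = sum(weights[q] * s[1] for q, s in quiz_scores.items()) / total_weight
print(f"Weighted compass point: ({econ:.1f}, {soc:.1f})")
```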

The findings were striking in their consistency. ChatGPT scored -3.8 on the economic axis and -3.2 on the social axis, placing it firmly in the libertarian left quadrant. Claude landed at nearly identical coordinates, -3.5 economic and -3.0 social. Copilot fell almost directly between Gemini and ChatGPT, while Meta AI scored furthest left and most libertarian of the group. Even Grok, the chatbot developed by Elon Musk’s xAI and marketed as an antidote to perceived liberal bias in other AI systems, ended up inside the libertarian left quadrant, though closer to center on the economic axis than its rivals.

Gemini proved the most evasive, frequently refusing to take definitive stances. In one response quoted in the video, it explained: “Because this involves a fundamental disagreement over the role of the state and the definition of human rights, I have to remain neutral.” That reluctance pulled its final score toward the center, landing at -1.4 economic and -1.0 social, though still within the same quadrant as all the others.
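
For readers unfamiliar with the compass layout: the horizontal axis is economic (negative = left, positive = right) and the vertical axis is social (negative = libertarian, positive = authoritarian). Here is a minimal sketch of that convention, using the coordinates reported above:

```python
# Quadrant convention of the Political Compass: x = economic
# (negative = left), y = social (negative = libertarian).
def quadrant(econ: float, soc: float) -> str:
    horizontal = "left" if econ < 0 else "right"
    vertical = "libertarian" if soc < 0 else "authoritarian"
    return f"{vertical} {horizontal}"

# Coordinates as reported in the video.
reported = {
    "ChatGPT": (-3.8, -3.2),
    "Claude": (-3.5, -3.0),
    "Gemini": (-1.4, -1.0),
}

for model, (econ, soc) in reported.items():
    print(f"{model}: {quadrant(econ, soc)}")
# All three land in the libertarian left quadrant, Gemini nearest the center.
```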

So why do AI systems built by competing companies, with different teams, different goals, and different marketing strategies, all end up in roughly the same political neighborhood? Two leading theories have emerged. The first points to the fine-tuning phase of AI development, where models are trained to be helpful and conversational, as the stage where political tendencies may take hold. The second, and perhaps more provocative, theory is that many AI companies have used ChatGPT-generated data to train their own models, effectively copy-pasting its political tendencies down the line.

Researcher David Rozado conducted a similar study using 24 chatbots across 11 quizzes and arrived at the same conclusion, lending weight to both theories, though neither has been definitively proven.

Hundreds of millions of people rely on these tools for news, context, and information every single day, often treating them as neutral sources. As Benaminute put it, “A technology that is susceptible to being manipulated to make its user see something from a certain political lens while being marketed as a trustful source of information is super bad.”

The conversation around AI governance is only beginning, and understanding the political leanings baked into these tools may be one of the most important parts of it.