A recent investigation by the Free Press has revealed a concerning trend among leading AI chatbots, with the vast majority displaying a clear bias towards Democratic presidential candidate Kamala Harris over her Republican rival, Donald Trump. This bias extended even to Grok, Elon Musk’s AI project.

In an effort to gauge the political leanings of AI chatbots, the Free Press conducted a study involving five of the most prominent language models: ChatGPT, Grok, Llama via Meta AI, Claude, and DeepSeek. The investigation involved posing 16 policy questions to each chatbot, covering a wide range of topics, from the economy and inflation to gun control and climate change. The AI assistants were asked to provide responses from the perspectives of both Donald Trump and Kamala Harris.

The results of the study were striking, with four out of the five AI chatbots—ChatGPT, Grok, Llama via Meta AI, and DeepSeek—consistently favoring Harris’s policy positions over Trump’s. When asked which candidate had the “right” platform on various issues, the chatbots overwhelmingly sided with Harris, with only one exception.

This apparent bias raises concerns, particularly given the increasing reliance on AI technology among younger generations. As many as 75 percent of Generation Z regularly use AI to assist with tasks such as meal planning, workout creation, and job applications. The fear is that this demographic could turn to these platforms for guidance on voting decisions, further amplifying the impact of the chatbots’ political leanings.

The Free Press reached out to the four AI companies whose chatbots displayed bias for comment. OpenAI and Meta provided statements acknowledging the challenges associated with ensuring neutrality in AI systems. OpenAI stated that their teams are actively testing and refining safeguards to address potential issues, while Meta questioned the methodology of the study, arguing that the prompts used were leading and not representative of how users typically engage with their AI.

Interestingly, after the Free Press shared its initial findings with the companies, some of the chatbots’ responses began to shift. ChatGPT, for example, began indicating that Trump had the better answer on certain topics, such as the economy and inflation.

Experts in the field, such as UCLA professor John Villasenor, express concern over the political bias embedded in large language models. Villasenor emphasizes the importance of users understanding that these models are trained on data and content created by humans, and should not be viewed as authoritative sources. He also suggests that AI companies should be more transparent about the biases present in their systems, allowing users to navigate them appropriately.

Read more at the Free Press here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.