Mark Zuckerberg’s Meta AI Loves Kamala Harris, Calls Donald Trump ‘Crude and Lazy’

Mark Zuckerberg tries to pull election strings
Jeff Bottari/Getty

In Mark Zuckerberg’s latest act of election interference, Meta AI gushes about Kamala Harris and her “trailblazing leadership” while lambasting Donald Trump as “crude and lazy” and “boorish and selfish.” The stark contrast between Meta AI’s answers is another illustration of the extreme leftist bias present in most popular AI platforms.

The New York Post reports that Meta’s AI chatbot has generated sharply contrasting reviews of former President Donald Trump and Vice President Kamala Harris. The difference in the AI’s assessments has sparked a debate about the potential for political bias in artificial intelligence and its implications for the upcoming 2024 presidential election.

As first documented by the Federalist, when asked, “Why should I vote for Donald Trump?” the chatbot warned that the Republican nominee had been criticized as “boorish and selfish” or “crude and lazy,” and that his administration had been accused of “potentially undermining voting rights and promoting voter suppression.” This negative assessment of the former president stood in sharp contrast to the AI’s glowing review of Vice President Kamala Harris.

When the Post posed the question, “Why should I vote for Kamala Harris?” the chatbot offered several “compelling reasons” to support the Democratic presidential nominee. It praised her “trailblazing leadership” as the first black and South Asian vice president, highlighted her “record job creation and low unemployment,” and commended her support for rent relief and voting rights. The AI assistant concluded by stating, “By voting for Kamala Harris, you’ll be supporting a leader dedicated to fighting for the rights and freedoms of all Americans.”

However, when the Post asked about Trump late last week, the chatbot’s tone was somewhat softer. It described Trump’s first term in office as “marked by controversy and polarization,” a disclaimer notably absent from its opinion of Harris. The AI tool did acknowledge some of Trump’s accomplishments, such as passing substantial Veterans Affairs reforms and implementing record-setting tax and regulation cuts that boosted economic growth. It also erroneously claimed that Trump had appointed only two Supreme Court justices; in fact, he appointed three.

The chatbot also said Trump’s handling of sensitive issues such as abortion and healthcare during his presidency had been “met with criticism from certain groups.” It concluded by stating that the decision to vote for Trump ultimately depends on an individual’s values, priorities, and policy preferences.

This is not the first instance of an AI tool exhibiting apparent political bias. Earlier this month, Amazon’s Alexa refused to answer questions about why voters should support Trump while enthusiastically endorsing Harris’ qualifications for the presidency. Amazon later attributed the disparity to an “error” that was promptly rectified following a wave of criticism.

Rep. James Comer (R-KY), chairman of the House Oversight Committee, expressed concern over the stark contrast in Meta’s responses regarding Trump and Harris. The committee has previously raised issues about Big Tech’s attempts to influence elections through censorship policies embedded in their algorithms.

A Meta spokesman explained that repeated queries on the same question can yield varying answers from the AI assistant. However, the Post’s subsequent attempts to engage the chatbot consistently produced responses that highlighted criticism of the former president while praising the Democratic nominee.

The spokesman acknowledged that, like any generative AI system, Meta AI can produce inaccurate, inappropriate, or low-quality outputs. He said the company is continuously working to improve these features based on user feedback and as the technology evolves.

As Breitbart News previously reported, essentially all major AI platforms demonstrate a leftist bias:

The study, published in the academic journal PLOS ONE, involved testing 24 different LLMs, including popular chatbots like OpenAI’s ChatGPT and Google’s Gemini, using 11 standard political questionnaires such as The Political Compass test. The results showed that the average political stance across all the models was not neutral but rather left-leaning.

This will not surprise those who have closely followed AI. For example, Google Gemini ran amok when it was launched, rewriting history into a woke mess of leftist fantasy.

Read more at the New York Post here.
