Monday, November 4, 2024

Google isn’t letting Gemini answer election questions because AI ‘can make mistakes’


US election season is here again, and it’s looking like another close one: as of writing, Vice President Kamala Harris and former president Donald Trump are more or less neck and neck in most polls. As political campaigns and the discourse around them unfold increasingly in online spaces, service providers like Google have an obligation to provide timely info while also doing their best to limit the spread of misinformation.

Google published a blog post today that outlines its efforts to that end across various platforms like Search and YouTube. The most interesting tidbit might be that the company is intentionally restricting its generative AI products like Gemini from providing information about elections or politics — explicitly because it knows that AI is often wrong.

Google actually laid out its plans to restrict its gen-AI products from dispensing information about elections last year, while Gemini was still called Bard. In December, a blog post spelled out that “out of an abundance of caution on such an important topic,” Google would be limiting the types of election questions its AI products can answer. Indeed, if you ask Gemini questions about US elections today, it’ll refuse to answer. Even for topics as cut and dried as which candidates are in a given race, Gemini will point you to Google Search. The company also says it’s working to expand its embedded watermarking for AI-generated content, SynthID, to more of its gen-AI tools.

It’s not all about AI

Did you know Google has products other than Gemini?

When it comes to Search, Google highlights an upcoming feature that’ll help US users get accurate voter registration information from “aggregated resources and information from state election offices.” On YouTube, searches for federal political candidates will include an information panel with some details about that candidate, including their political affiliation and links to both the candidate’s YouTube channel and a Google search for their name. On the Play Store, Google says, apps uploaded by government agencies will be clearly marked with a new badge.

It’s good that Google is putting guardrails on its AI products to help stop them from repackaging misinformation or memes as facts. The internet’s chock-full of sketchy info already, and as Google itself highlights, generative AI “can make mistakes as it learns or as news breaks.” It’s also liable to misunderstand tone and context to the tune of telling people to eat rocks and glue because that’s what it saw on Reddit.

At the same time, it’s troubling that a company with as much influence as Google is betting so big on products that, by its own admission, don’t work well enough to be trusted when it matters. AI is the marketing focal point for Google’s best phones yet, and the Google One tier that provides access to Gemini Advanced costs a whopping $20 per month, yet Google acknowledges that AI’s proclivity for being confidently wrong could cause serious harm in certain circumstances. Maybe AI will be ready for prime time by 2028.

