Voice assistants and AI chatbots still can’t say who won the 2020 election

Who won the 2020 presidential election? Alexa can’t always say. And chatbots built by Microsoft and Google won’t answer at all.

In a pivotal year for global democracy, some artificial intelligence chatbots and voice assistants are still struggling to answer basic questions about elections in the United States and abroad, raising concerns the tools could confuse voters.

In multiple tests run by The Washington Post this month, Amazon’s Alexa did not reliably produce the correct answer when asked who won the 2020 election.

“Donald Trump is the front-runner for the Republican Nomination at 89.3%,” Alexa replied on multiple occasions, citing the news website RealClearPolitics.

Chatbots built by Microsoft and Google, meanwhile, didn’t answer the question at all.

“I’m still learning how to answer this question. In the meantime, try Google Search,” replied Google’s Gemini. Microsoft’s Copilot responded: “Looks like I can’t respond to this topic. Explore Bing Search results.”

The errors and omissions come as tech companies increasingly invest in technology that pushes users toward a single definitive answer — rather than providing a list of websites — raising the stakes of each response. They also come as Donald Trump and his allies continue to press the false claim that the 2020 election was stolen. Multiple investigations have found no evidence of fraud, and Trump faces federal criminal charges related to his efforts to overturn the election of Joe Biden, who defeated Trump in the electoral college and won more than 51 percent of the popular vote.

Other assistants — including OpenAI’s ChatGPT and Apple’s Siri — accurately answered questions about the U.S. election.

Alexa, however, has struggled with election questions since October, when The Post first reported the voice assistant’s inaccuracies. Amazon said seven months ago that it had fixed the problem, and in The Post’s recent tests, Alexa did correctly answer that Biden won the 2020 election.

But slight variations of the question — such as whether Trump won in 2020 — returned strange answers last weekend. In one instance, Alexa said, “According to Reuters, Donald Trump beat Ron DeSantis in the 2024 Iowa Republican Primary 51% to 21%.” In another instance, it said “I don’t know who will win the 2020 U.S. presidential election,” and then gave polling data.

Amazon spokesperson Kristy Schmidt said customer trust is “paramount” for Amazon. (Amazon founder Jeff Bezos owns The Washington Post.)

“We continually test the experience and look closely at customer feedback,” she said. “If we identify that a response is not meeting our high accuracy bar, we quickly block the content.”

Meanwhile, Microsoft and Google say they intentionally designed their bots to refuse to answer questions about U.S. elections, deciding it’s less risky to direct users to find the information through their search engines.

The companies have taken the same approach in Europe, where the German news site Der Spiegel reported this month that the bots avoided basic questions about recent parliamentary elections, including when they would take place. Google’s Gemini also was unable to respond to broader political questions, including a query asking it to identify the country’s chancellor, according to the German media outlet.

“But shouldn’t the digital company’s flagship AI tool also be able to provide such an answer?” Der Spiegel wrote.

The companies imposed the limits after studies found the chatbots circulating misinformation about elections in Europe — a potential violation of a landmark new social media law that requires tech companies to implement safeguards against “negative effects on civic discourse and electoral processes” or face steep fines of up to 6 percent of global revenue.

Google said it has been “restricting the types of election-related queries for which Gemini app will return responses” since December, citing the need for caution ahead of global elections.

Microsoft spokesperson Jeff Jones said “some election-related prompts may be redirected to search” as the company refines its chatbot ahead of November.

Jacob Glick, a senior policy counsel with Georgetown University’s Institute for Constitutional Advocacy and Protection who served on the House committee investigating Jan. 6, said technology companies have to be very careful about feeding users inaccurate information.

“As disinformation around the 2024 election gets ramped up, we want to be able to rely on tech companies who are purveyors of information to provide unhesitatingly and maybe unflinchingly clear information about undisputed facts,” he said. “The decisions these companies are making aren’t neutral — they aren’t happening in a vacuum.”

Silicon Valley is increasingly responsible for sorting fact from fiction online as it builds AI-enabled assistants. On Monday, Apple announced a partnership with OpenAI that will bring generative AI capabilities to its Siri voice assistant for millions of users. Meanwhile, Amazon is preparing to launch a new, AI-powered version of its voice assistant as a subscription service in September, according to internal documents seen by The Post. Amazon declined to confirm the launch date.

It’s unclear how the company’s AI-enabled Alexa will handle election queries. A prototype demonstrated in September repeatedly answered questions incorrectly. Amazon still hasn’t launched the tool to the general public, and the company did not respond to questions about how the new version of Alexa will handle political questions.

Amazon is planning to launch the new product a year after the initial demo, but issues with unpredictable responses have been raising questions internally about whether it will be ready, according to an employee who spoke on the condition of anonymity to protect their job.

For example, an Amazon employee who was testing the new Alexa complained to the voice assistant about an issue he was having with another Amazon service, and Alexa responded by offering the employee a free month of Prime. Employees did not know whether the AI was actually able or authorized to do that, the employee told The Post.

Amazon said it has been continuously testing the new AI Alexa, and will have a high bar for its performance before launch.

Amazon and Apple have been slow to catch up with AI chatbots, despite their early dominance of the voice assistant market with Alexa and Siri. “Alexa AI was riddled with technical and bureaucratic problems,” said former Amazon research scientist Mihail Eric in a post on X on Wednesday.

The devices division at Amazon that built Alexa has struggled recently, losing its head, David Limp, in August, an exit that was followed by layoffs. The team is now run by former Microsoft executive Panos Panay.

But the technology that those devices are built on is a different, more scripted system than the generative AI that powers tools such as ChatGPT, Gemini and Copilot.

“It’s a totally different architecture,” said Grant Berry, a Villanova University linguistics professor who used to work on Alexa for Amazon.

Berry said voice assistants were designed to interpret human requests and respond with the correct action — think, “Alexa, play music” or “Alexa, dim the lights.” Generative AI chatbots, by contrast, are designed to be conversational, social and informative. Turning the former into the latter isn’t a simple upgrade but a rebuilding of the product’s internals, according to Berry.
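
To make Berry’s distinction concrete, here is a minimal, hypothetical sketch in Python contrasting the two designs. Nothing in it comes from Amazon, Apple or Google; the intent phrases, handler functions and the placeholder model are invented purely for illustration.

```python
# Illustrative sketch only -- not any company's actual assistant code.
# It contrasts a scripted, intent-based assistant (a fixed action per
# recognized request) with a generative chatbot (open-ended text from a model).

from typing import Callable

# --- Intent-based voice assistant (hypothetical intents and handlers) --------
def play_music() -> str:
    return "Playing your music."

def dim_lights() -> str:
    return "Dimming the lights."

INTENT_HANDLERS: dict[str, Callable[[], str]] = {
    "play music": play_music,
    "dim the lights": dim_lights,
}

def scripted_assistant(utterance: str) -> str:
    """Match the request against known intents and run the fixed action."""
    for phrase, handler in INTENT_HANDLERS.items():
        if phrase in utterance.lower():
            return handler()
    return "Sorry, I can't help with that."  # nothing matched: scripted fallback

# --- Generative chatbot (language model represented by a stand-in) -----------
def generative_assistant(utterance: str, model: Callable[[str], str]) -> str:
    """Hand the raw text to a language model; the reply is whatever it produces."""
    return model(f"User: {utterance}\nAssistant:")

if __name__ == "__main__":
    fake_model = lambda prompt: "(model-generated reply)"  # placeholder, not a real API
    print(scripted_assistant("Alexa, play music"))                         # predictable
    print(generative_assistant("Who won the 2020 election?", fake_model))  # open-ended
```

In the scripted design, every possible answer is written and vetted in advance; in the generative design, the response depends on whatever the model produces, the kind of unpredictability Amazon has been wrestling with ahead of launch.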

When Amazon and Apple launch their new assistants, Berry said, they’ll be combining the “objective-oriented” assistants with the “socially-oriented” chatbots.

“When those things get blurred, there will be whole new issues we need to be mindful of,” Berry said.
