Wednesday, October 23, 2024

To keep deepfakes from infiltrating its site, Yahoo News enlists help from McAfee


The 2024 U.S. presidential campaign has featured some notable deepfakes — AI-powered impersonations of candidates that sought to mislead voters or demean the candidates being targeted. Thanks to Elon Musk’s retweet, one of those deepfakes has been viewed more than 143 million times.

The prospect of unscrupulous campaigns or foreign adversaries using artificial intelligence to influence voters has alarmed researchers and officials around the country, who say AI-generated and -manipulated media are already spreading fast online. For example, researchers at Clemson University found an influence campaign on the social platform X that’s using AI to generate comments from more than 680 bot-powered accounts supporting former President Trump and other Republican candidates; the network has posted more than 130,000 comments since March.

To boost its defenses against manipulated images, Yahoo News — one of the most popular online news sites, attracting more than 190 million visits per month, according to Similarweb.com — announced Wednesday that it is integrating deepfake image detection technology from cybersecurity company McAfee. The technology will review the images submitted by Yahoo News contributors and flag the ones that were probably generated or doctored by AI, helping the site's editorial standards team decide whether to publish them.

Matt Sanchez, president and general manager of Yahoo Home Ecosystem, said the company is just trying to stay a step ahead of the tricksters.

“While deepfake images are not an issue on Yahoo News today, this tool from McAfee helps us to be proactive as we’re always working to ensure a quality experience,” Sanchez said in an email. “This partnership boosts our existing efforts, giving us greater accuracy, speed, and scale.”

Sanchez said outlets across the news industry are thinking about the threat of deepfakes — “not because it is a rampant problem today, but because the possibility for misuse is on the horizon.”

Thanks to easy-to-use AI tools, however, deepfakes have proliferated to the point that 40% of the high schoolers polled in August said they had heard about some kind of deepfake imagery being shared at their school. The online database of political deepfakes being compiled by three Purdue University academics includes almost 700 entries, more than 275 of them from this year alone.

Steve Grobman, McAfee’s chief technology officer and executive vice president, said the partnership with Yahoo News grew out of McAfee’s work on products to help consumers detect deepfakes on their computers. The company realized that the tech it developed to flag potential AI-generated images could be useful to a news site, especially one like Yahoo that combines its own journalists’ work with content from other sources.

McAfee’s technology adds to the “rich set of capabilities” Yahoo already had to check the integrity of the material coming from its sources, Grobman said. The deepfake detection tool, which is itself powered by AI, examines images for the sorts of artifacts that AI-powered tools leave among the millions of data points within a digital picture.

“One of the really neat things about AI is, you don’t need to tell the model what to look for. The model figures out what to look for,” Grobman said.
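The workflow Grobman and Sanchez describe — a model scores each submitted image, and likely fakes are routed to the editorial standards team rather than published or auto-rejected — can be sketched in outline. This is a hypothetical illustration, not McAfee's actual API: the `Submission` type, the `ai_score` field, and the threshold value are all assumptions made for the example.

```python
# Hypothetical sketch of the triage flow the article describes.
# Assumed: an upstream detection model has already produced an
# ai_score in [0.0, 1.0] for each image; the cutoff is illustrative.
from dataclasses import dataclass

@dataclass
class Submission:
    image_id: str
    ai_score: float  # probability the image is AI-generated/doctored

FLAG_THRESHOLD = 0.8  # assumed cutoff; a real system would tune this

def triage(submissions):
    """Split a batch into (flagged, cleared).

    Flagged images are sent to human editors for a publish/reject
    decision -- the tool assists the standards team, it does not
    make the final call.
    """
    flagged = [s for s in submissions if s.ai_score >= FLAG_THRESHOLD]
    cleared = [s for s in submissions if s.ai_score < FLAG_THRESHOLD]
    return flagged, cleared

batch = [Submission("img-001", 0.95), Submission("img-002", 0.12)]
flagged, cleared = triage(batch)
print([s.image_id for s in flagged])   # likely fakes -> editor review
print([s.image_id for s in cleared])   # low-score images proceed
```

The key design point, consistent with both Sanchez's and Kang's comments later in the piece, is that the detector is a filter in front of human judgment, not a replacement for it.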

“The quality of the fakes is growing rapidly, and part of our partnership is just trying to get in front of it,” he said. That means monitoring the state of the art in image generation and using new examples to improve McAfee’s detection technology.

Nicos Vekiarides, chief executive of the fraud-prevention company Attestiv, said it’s an arms race between companies like his and the ones making AI-powered image generators. “They’re getting better. The anomalies are getting smaller,” Vekiarides said. And although there is increasing support among major industry players for inserting watermarks in AI-generated material, the bad actors won’t play by those rules, he said.

In his view, deepfake political ads and other bogus material broadcast to a wide audience won’t have much effect because “they get debunked fairly quickly.” What’s more likely to be harmful, he said, are the deepfakes pushed by influencers to their followers or passed from individual to individual.

Daniel Kang, an assistant professor of computer science at the University of Illinois Urbana-Champaign and an expert in deepfake detection, warned that no AI detection tools today are good enough to catch a highly motivated and well-resourced attacker, such as a state-sponsored deepfake creator. Because there are so many ways to manipulate an image, an attacker “can tune more knobs than there are stars in the universe to try to bypass the detection mechanisms,” he said.

But many deepfakes aren’t coming from highly sophisticated attackers, which is why Kang said he’s bullish on the current technologies for detecting AI-generated media even if they can’t identify everything. Adding AI-powered tools to sites now enables the tools to learn and get better over time, just as spam filters do, Kang said.

They’re not a silver bullet, he said; they need to be combined with other safeguards against manipulated content. Still, Kang said, “I think there’s good technology that we can use, and it will get better over time.”

Vekiarides said the public has set itself up for the wave of deepfakes by accepting the widespread use of image manipulation tools, such as the photo editors that virtually airbrush the imperfections from magazine-cover photos. It’s not so great a leap from a fake background in a Zoom call to a deepfaked image of the person you’re meeting with online, he said.

“We’ve let the cat out of the bag,” Vekiarides said, “and it’s hard to put it back in.”
