Thursday, September 19, 2024

Google searches will now detect origin of AI-manipulated images


Google is expanding its efforts to properly label AI-generated content, updating its in-house “About This Image” tool with a global standard for identifying the origins of AI-edited images.

The new label was formulated as part of Google’s work with the global Coalition for Content Provenance and Authenticity (C2PA). Members of the C2PA have committed to developing and adopting a standardized AI certification and detection process, enabled by a verification technology known as “Content Credentials.” Not all C2PA members, which include Amazon, Meta, and OpenAI, have implemented the authentication standards, however.

Google is taking the first step among key players, integrating the C2PA’s new 2.1 standard into products like Google Search and eventually Google Ads (the “About This Image” prompt is found by clicking on the three vertical dots located above a photo uncovered in a search). This standard includes an official “Trust List” of devices and technology that can help vet the origin of a photo or video through its metadata. “For example, if the data shows an image was taken by a specific camera model, the trust list helps validate that this piece of information is accurate,” Laurie Richardson, Google vice president of trust and safety, told The Verge. “Our goal is to ramp this up over time and use C2PA signals to inform how we enforce key policies.”
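
To illustrate the idea (this is not Google’s or the C2PA’s actual implementation), the sketch below shows how provenance metadata attached to an image might be checked against a trust list. All field names, the manifest layout, and the TRUSTED_DEVICES set are hypothetical; the real Content Credentials standard validates cryptographic signatures from certified hardware and software rather than matching strings.

```python
import json

# Hypothetical, simplified trust list. In C2PA 2.1 this is an officially
# maintained list of certified devices and tools; here it is just a set of
# device model names for illustration.
TRUSTED_DEVICES = {"ExampleCam X100", "ExamplePhone 15"}


def summarize_provenance(manifest_path: str) -> str:
    """Read a C2PA-style manifest (stored here as plain JSON for simplicity)
    and report whether the claimed capture device appears on the trust list.

    Field names loosely mirror the C2PA manifest structure but are
    illustrative, not the exact schema.
    """
    with open(manifest_path, "r", encoding="utf-8") as f:
        manifest = json.load(f)

    device = manifest.get("capture_device", "unknown device")
    # Collect any recorded edit actions (e.g. AI-based retouching).
    edits = [a.get("action", "unknown action")
             for a in manifest.get("assertions", [])
             if a.get("label") == "c2pa.actions"]

    trusted = device in TRUSTED_DEVICES
    status = "validated against trust list" if trusted else "not on trust list"
    edit_note = f"; recorded edits: {', '.join(edits)}" if edits else ""
    return f"Captured by {device} ({status}){edit_note}"


if __name__ == "__main__":
    # Hypothetical sidecar manifest file for a photo.
    print(summarize_provenance("photo_manifest.json"))
```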


After joining the C2PA in May, TikTok became the first video platform to implement the C2PA’s Content Credentials, including an automatic labeling system that reads a video’s metadata and flags it as AI-generated. With the launch of Content Credentials for Google platforms, YouTube is set to follow in its footsteps.

Google has been vocal about widespread AI labeling and regulation, especially in its efforts to curb the spread of misinformation. In 2023, Google launched SynthID, its own digital watermarking tool designed to help detect and track AI-generated content made using Google DeepMind’s text-to-image generator, Imagen. It introduced (limited) AI labeling mandates for YouTube videos earlier this year, and has committed to addressing AI-generated deepfake content in Google Search.

The company joined the C2PA steering committee in February, a group that includes other major industry players and even news organizations, like the BBC.
