Friday, November 22, 2024

Google adds new disclosures for AI photos, but it’s still not obvious at first glance | TechCrunch


Starting next week, the Google Photos app will add a new disclosure for when a photo has been edited with one of its AI features, such as Magic Editor, Magic Eraser, and Zoom Enhance. When you open a photo in Google Photos and scroll to the bottom of the “Details” section, you’ll now see a note when a photo has been “Edited with Google AI.”

Google says it’s introducing this disclosure to “further improve transparency.” However, it’s still not that obvious when a photo has been edited by AI. There still won’t be visual watermarks within the frame of a picture indicating that a photo is AI-generated. If someone sees a photo edited by Google’s AI on social media, in a text message, or even while scrolling through their photos app, they won’t immediately see that the photo is synthetic.

Google’s new AI disclosure (Image Credit: Google)

Google announced the new disclosure for AI photos in a blog post on Thursday, a little over two months after Google unveiled its new Pixel 9 phones, which are jam-packed with these AI photo editing features. The disclosures seem to be a response to the backlash Google received for widely distributing these AI tools without any visual watermarks that are easily readable by humans.

As for Best Take and Add Me — Google’s other new photo-editing features that don’t use generative AI — Google Photos will now also note in those photos’ metadata that they have been edited, though not under the Details tab. Those features combine multiple photos so they appear as one clean image.

These new tags don’t exactly solve the main issue people have with Google’s AI editing features: the lack of visual watermarks in the frame of a photo. Watermarks you can see at a glance might help people feel less deceived, but Google isn’t adding them.

Every photo edited by Google AI already discloses that it’s edited by AI in the photo’s metadata. Now, there’s also an easier-to-find disclosure under the Details tab in Google Photos. The problem is that most people don’t look at the metadata or the Details tab of photos they see on the internet. They just look and scroll away, without much further investigation.
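That metadata disclosure is machine-readable rather than visible: anyone with a metadata reader can check a file for it, but nothing shows up in the image itself. Below is a minimal sketch of what such a check could look like, assuming the exiftool command-line tool is installed; the specific field names Google writes (for instance, an IPTC “DigitalSourceType” value) are assumptions for illustration, not details confirmed in Google’s announcement.

```python
# Minimal sketch, not Google's implementation: scan a photo's metadata for
# fields that look like an AI-editing disclosure. Assumes the `exiftool` CLI
# is installed; the specific tag names Google writes (e.g., an IPTC
# "DigitalSourceType" value) are assumptions, not confirmed here.
import json
import subprocess
import sys

def ai_edit_fields(path: str) -> dict:
    """Return metadata fields that hint at AI or algorithmic editing."""
    result = subprocess.run(
        ["exiftool", "-json", path],
        capture_output=True, text=True, check=True,
    )
    metadata = json.loads(result.stdout)[0]  # exiftool -json emits a list of records
    hints = ("digitalsourcetype", "credit", "software")
    return {
        key: value
        for key, value in metadata.items()
        if any(h in key.lower() for h in hints)
        or (isinstance(value, str) and "algorithmic" in value.lower())
    }

if __name__ == "__main__":
    for field, value in ai_edit_fields(sys.argv[1]).items():
        print(f"{field}: {value}")
```

The point of the sketch is simply that the disclosure lives in fields like these, which ordinary viewers never open while scrolling a feed.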

To be fair, visual watermarks in the frame of an AI photo are not a perfect solution either. People can easily crop or edit these watermarks out, and then we’re back to square one. We reached out to Google to ask if they’re doing anything to help people immediately identify whether a photo is edited by Google AI, but didn’t immediately hear back.

The proliferation of Google’s AI image tools could increase the amount of synthetic content people view on the internet, making it harder to discern what’s real and what’s fake. The approach Google has taken, using metadata watermarks, relies on platforms to indicate to users that they’re viewing AI-generated content. Meta is already doing this on Facebook and Instagram, and Google says it plans to flag AI images in Search later this year. But other platforms have been slower to catch up.
