Wednesday, February 26, 2025

Google Starts Scanning Your Photos—Without Any Warning

You may recall Apple’s un-Apple-like moment a few weeks ago, when users discovered their photos were being scanned by Apple Intelligence to match landmarks. Users had not been told, and it caused a furor among security experts. Google is now going through much the same. And again, it’s not the technology, it’s the secrecy.

Apple’s Enhanced Visual Search sends parts of photos to the cloud to match against a global index of points of interest. It’s very privacy-preserving, but as cryptography expert Matthew Green complained, “it’s very frustrating when you learn about a service two days before New Year’s and you find that it’s already been enabled on your phone.”

Google’s awkward moment relates to SafetyCore, an Android system update that enables on-device image scanning. It could, in principle, do all kinds of things, but it is currently focused on blurring or flagging sensitive content. It’s seemingly even more private than Apple’s Enhanced Visual Search, given that everything happens on-device. So we’re told.

But when a technology is installed and enabled on our phones without warning, after-the-fact assurances that it’s all fine are met with more skepticism than would have been the case had it been done more openly. That’s the same issue Apple faced.

I have covered SafetyCore before, pointing out that the on-device scanning it brings to Google Messages would be a welcome addition to Gmail, shifting security scanning from Google’s servers to a user’s phone. But that doesn’t change the point about the lack of openness.

GrapheneOS, a security-focused Android developer, provides some comfort, noting that SafetyCore “doesn’t provide client-side scanning used to report things to Google or anyone else. It provides on-device machine learning models usable by applications to classify content as being spam, scams, malware, etc. This allows apps to check content locally without sharing it with a service and mark it with warnings for users.”

But GrapheneOS also points out that “it’s unfortunate that it’s not open source and released as part of the Android Open Source Project and the models also aren’t open let alone open source… We’d have no problem with having local neural network features for users, but they’d have to be open source.” Which gets to transparency again.

Google says that SafetyCore “provides on-device infrastructure for securely and privately performing classification to help users detect unwanted content. Users control SafetyCore, and SafetyCore only classifies specific content when an app requests it through an optionally enabled feature.”

And once users know it’s there, that’s all true.

Per ZDNet, the issue is that “Google never told users this service was being installed on their phones. If you have a new Android device or one with software updated since October, you almost certainly have SafetyCore on your phone.” As with Apple, “one of SafetyCore’s most controversial aspects is that it installs silently on devices running Android 9 and later without explicit user consent. This step has raised concerns among users regarding privacy and control over their devices.”

If you don’t trust Google, because, as ZDNet points out, “just because SafetyCore doesn’t phone home doesn’t mean it can’t call on another Google service to tell Google’s servers that you’ve been sending or taking ‘sensitive’ pictures,” then you can stop it. You can find the option to uninstall or disable the service by tapping on ‘SafetyCore’ under ‘System Apps’ in the main ‘Apps’ settings menu on your phone.
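For readers comfortable with a command line, the same check and removal can be scripted over adb. What follows is a minimal sketch, not an official Google procedure: it assumes adb is installed with USB debugging enabled, and that SafetyCore ships under the widely reported package name com.google.android.safetycore, which you should verify on your own device.

import subprocess

# Widely reported SafetyCore package name; treat this as an assumption
# and verify it on your own device before uninstalling anything.
PKG = "com.google.android.safetycore"

def adb(*args: str) -> str:
    # Run an adb command and return its combined stdout/stderr as text.
    result = subprocess.run(["adb", *args], capture_output=True, text=True)
    return result.stdout + result.stderr

# Check whether the package is present on the connected device.
installed = PKG in adb("shell", "pm", "list", "packages")
print(f"{PKG} installed: {installed}")

if installed:
    # Remove it for the current user only; no root required.
    print(adb("shell", "pm", "uninstall", "--user", "0", PKG))

Note that uninstalling for user 0 leaves the package in the system image, so a later update can quietly reinstall it; the settings route above achieves the same end without a computer.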

Lessons learnt for both Apple and Google in recent weeks, then: if you want to turn our phones into AI-fueled machines, let us know what you’re doing before you do it, and give us the opportunity to say yes or no. Otherwise, it fuels fear of the unknown.
