
Inside Google Cloud’s secure AI framework


Google Cloud is addressing growing concerns about artificial intelligence (AI) security with a secure AI framework built on its internal security practices that equips businesses with tools and guidance to manage the evolving risks of AI deployments.

In a recent interview with Computer Weekly in Singapore, Google Cloud’s chief information security officer (CISO), Phil Venables, highlighted the framework’s focus on three key areas: software lifecycle risk, data governance and operational risk.

Venables said Google Cloud’s approach towards AI security stems from its unique position of owning and running the entire AI stack, from hardware and infrastructure to models and data, allowing it to build security measures from the ground up.

“But we recognise it’s not enough to have strong foundational infrastructure for our own AI security,” Venables said. “We have to empower customers to manage AI safely and securely in their environments.”

Google Cloud’s secure AI framework tackles software lifecycle risk by providing tools within Vertex AI to manage the software development process for AI workloads. This includes managing model weights and parameters, an advantage over alternatives that require separate tools and processes.
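
As a rough illustration of what that lifecycle management can look like in practice, the sketch below registers a trained model’s artifacts through the Vertex AI Python SDK (google-cloud-aiplatform) so the weights and parameters are versioned inside the managed platform rather than handled ad hoc. The project, bucket and serving-container values are placeholders, not details from the interview.

```python
# Minimal sketch: register model artifacts in Vertex AI so weights and
# parameters are versioned and access-controlled. Assumes the
# google-cloud-aiplatform SDK; all names below are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

model = aiplatform.Model.upload(
    display_name="demo-classifier",
    artifact_uri="gs://my-bucket/models/demo-classifier/v1/",
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest"
    ),
)
print(model.resource_name)  # the registered, versioned model resource
```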

Data governance, another critical area, is addressed through features that allow customers to track data lineage, ensure data integrity, and maintain clear separation between their data and Google’s foundational models. This prevents data leakage and helps organisations, even those lacking data governance expertise, to effectively manage their AI data.
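
The sketch below, which uses made-up names rather than any Google Cloud schema, illustrates the kind of lineage record such tooling keeps for each training dataset: where it came from, how it was transformed and which tenant owns it, so customer data stays clearly separated from foundation-model data.

```python
# Hypothetical lineage record for a training dataset; purely illustrative,
# not a Google Cloud API or schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class LineageRecord:
    dataset_name: str
    source_uri: str
    transformation: str
    owner: str  # which customer tenant or environment the data belongs to
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


record = LineageRecord(
    dataset_name="customer-feedback-2024",
    source_uri="gs://my-bucket/raw/feedback.csv",
    transformation="deduplicated, PII redacted",
    owner="customer-tenant-42",
)
print(record)
```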

Operational risks, which emerge once an AI system is deployed, are mitigated through features such as Model Armor. This capability allows users to implement input and output filters, controlling the data flow to and from the AI model and preventing malicious attacks like prompt injection, where attackers manipulate input prompts to force a model into unintended behaviour.
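
The pattern is easier to see in code. The sketch below is not the Model Armor API, just a generic illustration of where input and output filters sit around a model call; the injection patterns and the model_call parameter are hypothetical.

```python
# Illustrative input/output filter wrapper around a model call.
# NOT the Model Armor API; patterns and model_call are placeholders.
import re
from typing import Callable

INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"reveal your system prompt", re.I),
]


def filtered_generate(prompt: str, model_call: Callable[[str], str]) -> str:
    # Input filter: reject prompts matching known injection patterns.
    if any(p.search(prompt) for p in INJECTION_PATTERNS):
        raise ValueError("Prompt rejected by input filter")

    response = model_call(prompt)

    # Output filter: redact anything that looks like a leaked credential
    # before the response leaves the system boundary.
    return re.sub(r"(?i)api[_-]?key\s*[:=]\s*\S+", "[REDACTED]", response)
```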

Venables said the framework is also being updated to keep pace with risks such as data poisoning, which manipulates training data to corrupt a model’s behaviour and elicit biased or harmful outputs. Google Cloud provides guidance on data integrity management and implements filters to prevent such attacks.
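
One simple form of that integrity management can be illustrated as follows: record a fingerprint of each training file when it is ingested, then refuse to train if the data has since changed. The file paths and recorded hashes here are hypothetical, not part of Google Cloud’s tooling.

```python
# Hypothetical pre-training integrity check: compare each training file's
# current hash against the fingerprint recorded at ingestion.
import hashlib


def sha256_of(path: str) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_training_data(expected_hashes: dict[str, str]) -> None:
    """Raise if any training file no longer matches its recorded fingerprint."""
    for path, expected in expected_hashes.items():
        if sha256_of(path) != expected:
            raise RuntimeError(f"Integrity check failed for {path}")


# Usage (values are placeholders):
# verify_training_data({"data/feedback_clean.csv": "<sha256 recorded at ingestion>"})
```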

Addressing customer maturity in adopting these security measures, Venables noted that many organisations are still transitioning from AI prototypes to full production and have found the secure AI framework useful for establishing risk management and governance processes.

Specifically, he pointed to the integrated nature of the framework within Vertex AI as a key differentiator, offering ready-to-use security controls and sparing businesses from having to assemble their own controls from scratch.

Venables said the development teams behind Google’s Gemini family of foundation models also adhere to the secure AI framework, underscoring the company’s commitment to internal adoption before external release.

Meanwhile, Google Cloud is fostering industry-wide collaboration by open-sourcing the secure AI framework and co-founding the Coalition for Secure AI, encouraging other tech companies to contribute and build upon these best practices.

Venables also addressed the organisational challenges of AI security, noting that responsibility depends on the specific context and industry regulations. While highly regulated industries may involve multiple teams, including risk, compliance and legal, the security team often takes the lead. He noted a trend of CISOs evolving into chief digital risk officers, reflecting the broader scope of risk management required in the age of AI.

On the challenge of understanding AI risks, Venables acknowledged that many security teams are still learning, adding that Google Cloud supports customers by providing tools, training, workshops, and access to expert teams. It is also developing “data cards” and “model cards”, similar to software bills of materials, to provide transparency into the components and data used in AI models.
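
In purely illustrative terms, the sketch below shows the kind of metadata such a model card might capture, by analogy with a software bill of materials; the fields and values are hypothetical rather than a Google Cloud schema.

```python
# Hypothetical model card: a structured record of what went into a model,
# analogous to a software bill of materials. Fields and values are illustrative.
import json

model_card = {
    "model_name": "demo-classifier",
    "version": "1.0.0",
    "base_model": "example-foundation-model",
    "training_datasets": [
        {"name": "customer-feedback-2024", "sha256": "<digest recorded at ingestion>"},
    ],
    "intended_use": "internal support-ticket triage",
    "known_limitations": ["not evaluated on non-English text"],
}

print(json.dumps(model_card, indent=2))
```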
