Tom Currymax 1000x1000 1
Welcome to the first Cloud CISO Perspectives for January 2026. Today, Tom Curry and Anton Chuvakin, from Google Cloud’s Office of the CISO, share our new report on using Google’s Secure AI Framework with Google Cloud capabilities and services to build boldly and responsibly with AI.
As with all Cloud CISO Perspectives, the contents of this newsletter are posted to the Google Cloud blog. If you’re reading this on the website and you’d like to receive the email version, you can subscribe here.
By Tom Curry, senior security consultant, and Anton Chuvakin, security advisor, Office of the CISO
Tom Curry, senior security consultant, Office of the CISO
Ensuring that AI can be used safely and securely to achieve bold and responsible business goals is a critical matter for today’s organizations.
At Google, we know that security and data privacy are the top concern for executives when evaluating AI providers, and security is the top use case for AI agents in a majority of the industries surveyed, according to our recent report on the Return on Investment of AI in security. To help security and business leaders secure AI and mitigate AI risk, we developed the Secure AI Framework in 2023. As AI has evolved, so has SAIF, and we now offer practical guidance on how to implement technology-agnostic SAIF controls on Google Cloud.
Google’s Secure AI Framework (SAIF) is a framework for securing AI systems throughout their lifecycles. While SAIF is designed for security professionals, developers, and data scientists on the frontlines to ensure AI models and applications are secure by design, security and business leaders — and especially CISOs — play a crucial role in helping their organizations incorporate SAIF as part of their secure-by-design strategy.
Anton Chuvakin, security advisor, Office of the CISO
The SAIF Risk Map details a conceptual system architecture for AI based on four components: Data, infrastructure, model, and application. SAIF also identifies 15 common AI risks, highlights where these risks occur, and maps each one against AI controls — including guidance on agentic AI risks and controls.
We strongly believe that securing AI should be an industry-wide, community effort, and as part of that commitment we’ve contributed SAIF components to the Coalition for Secure AI (CoSAI). As we explain in our newest paper on how to implement SAIF controls in Google Cloud, there are three key approaches that can help successfully apply SAIF to AI development: Data should be treated as the new perimeter, prompts should be treated as code, and secure agentic AI requires identity propagation.
The model is only as secure as the data that feeds it. We recommend organizations shift focus from protecting the model to sanitizing the supply chain.
1. Data is the new perimeter
The model is only as secure as the data that feeds it. We recommend organizations shift focus from protecting the model to sanitizing the supply chain. This is where automated discovery and differential privacy can be used to ensure personally identifiable information never becomes part of the model’s memory.
The SAIF Data Controls address risks in data sourcing, management, and use for model training and user interaction, ensuring privacy, integrity, and authorized use throughout the AI lifecycle. Key data security tools on Google Cloud include: Identity and Access Management, access controls for Cloud Storage and BigQuery, Dataplex for data governance, and Vertex AI managed datasets.
2. Treat prompts like code
Once data and infrastructure have been secured, we want to secure the model itself. Models can be attacked directly through malicious inputs (prompts), and their outputs can be manipulated to cause harm. In terms of ease-of-use for threat actors, prompt injection is the new SQL injection.
The SAIF Model Controls are designed to build resilience into the model and sanitize its inputs and outputs to protect against these emerging threats. We recommend that you deploy a dedicated AI firewall (such as Model Armor) to inspect every input for malicious intent and every output for sensitive data leaks before they reach the user. Additional key tools from Google Cloud include: Using Gemini as a guard model and using Apigee as a sophisticated API gateway.
3. Agentic AI requires identity propagation
Moving from chatbots to autonomous or semi-autonomous agents increases the blast radius of a security compromise. To help mitigate the risks of rogue actions and sensitive data disclosure, we strongly advise against using service accounts that have broad access: Any actions taken by AI agents on a user’s behalf should be properly controlled and permissioned, and agents should be instructed to propagate the actual user’s identity and permissions to every backend tool they touch.
SAIF recommends application controls to secure the interface between the end user and the AI model. As described in Google’s Agent Development Kit safety and security guidelines, AI agent developers should carefully consider whether interactions with backend tools should be authorized with the agent’s own identity or with the identity of the controlling user. As we explain in the new SAIF report, it takes several steps to implement user authorization for agents: Front-end authentication, identity propagation, authorization for model context protocol (MCP) and agent-to-agent (A2A) protocol, and IAM for Google Cloud services.
Bold and responsible: Building with SAIF
The Secure AI Framework provides a roadmap for navigating the complex security landscape of artificial intelligence. These three key approaches are crucial to SAIF, but there’s more to the framework. Governance controls, assurance controls (including red teaming and vulnerability management,) and application controls are critical SAIF components — and a key part of our alignment with Google Cloud global-scale security principles and capabilities.
For more information on how your organization can operationalize SAIF, you can read the full report here.
Here are the latest updates, products, services, and resources from our security teams so far this month:
Please visit the Google Cloud blog for more security stories published this month.
Please visit the Google Cloud blog for more threat intelligence stories published this month.
To have our Cloud CISO Perspectives post delivered twice a month to your inbox, sign up for our newsletter. We’ll be back in a few weeks with more security-related updates from Google Cloud.
Suppose you’ve built your machine learning model, run the experiments, and stared at the results…
Recurrent Neural Networks (RNNs) laid the foundation for sequence modeling, but their intrinsic sequential nature…
Our work with large enterprise customers and Amazon teams has revealed that high stakes use…
Alfred Wahlforss was running out of options. His startup, Listen Labs, needed to hire over…
Leaders at Mira Murati’s startup believe Barret Zoph engaged in an incident of “serious misconduct.”…
OpenAI announced Friday it will begin testing advertisements on ChatGPT in the coming weeks, as…