Welcome to the first Cloud CISO Perspectives for July 2025. Today, Sandra Joyce, vice president, Google Threat Intelligence, talks about an incredible milestone with our Big Sleep AI agent, as well as other news from the intersection of security and AI.
As with all Cloud CISO Perspectives, the contents of this newsletter are posted to the Google Cloud blog. If you’re reading this on the website and you’d like to receive the email version, you can subscribe here.
By Sandra Joyce, vice president, Google Threat Intelligence
Business leaders everywhere are scrambling to implement AI in ways that create value while, at the same time, still working out what that value means. As we build on our efforts to shape AI workflows in cybersecurity, we’re already excited about how we’re using AI in the work that we do.
I spoke about some of that work at the RSA Conference in April, including how AI is reshaping cybersecurity and how Google uses data to drive our practical applications of AI in both attack and defense. We revealed Tuesday that our Big Sleep AI agent, first introduced in November 2024 by Google DeepMind and Project Zero, has taken a very significant step for defenders: We believe this is the first time an AI agent has been used to directly foil efforts to exploit a vulnerability in the wild.
Through the combination of threat intelligence from the Google Threat Intelligence Group (GTIG) and the Big Sleep AI agent, we were recently able to identify a critical SQLite vulnerability known only to threat actors that was imminently going to be used — and actually cut it off beforehand.
With Big Sleep, we’ve demonstrated how we can find vulnerabilities that defenders don’t yet know about. In this case, we found a vulnerability that the attackers knew about and had every intention of using, and we were able to detect and report it for patching before they could exploit it.
Developed by Google DeepMind and Google Project Zero, Big Sleep can help security researchers find zero-day (previously-unknown) software security vulnerabilities. Since it was introduced last year, it has continued to discover multiple flaws in widely-used software, exceeding our expectations and accelerating AI-powered vulnerability research.
Attackers have long had an advantage because they were taking shots at a massive goal with a lot of ground to defend, but the productivity gains from a defender’s point of view are astounding to us. If you had a human in place of Big Sleep, they would’ve had to pore over two different versions of the open-source code and manually work out where the vulnerability was, all while knowing that attackers were planning on using it soon.
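To make that concrete, here’s a minimal sketch (in Python) of the mechanical core of that manual workflow: diffing two checkouts of a codebase to localize a change. This is illustrative only, not how Big Sleep works, and the SQLite file paths are hypothetical.

```python
# Illustrative only: diff two versions of a source file to surface where
# a change (and thus a potential vulnerability) lives. This is the manual
# task an analyst would face; it is not how Big Sleep operates.
import difflib
from pathlib import Path

def changed_hunks(old_path: str, new_path: str) -> list[str]:
    """Return unified-diff hunks between two versions of a source file."""
    old = Path(old_path).read_text().splitlines(keepends=True)
    new = Path(new_path).read_text().splitlines(keepends=True)
    return list(difflib.unified_diff(old, new, fromfile=old_path, tofile=new_path))

if __name__ == "__main__":
    # Hypothetical paths to two checkouts of the codebase.
    for line in changed_hunks("sqlite-old/src/select.c", "sqlite-new/src/select.c"):
        print(line, end="")
```

Even automated diffing only narrows the search; judging which change is security-relevant, and how it could be exploited, is the part where speed matters most.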
Speed and accuracy made all the difference in this case, which gave us an edge over threat actors. Since defenders own and control these systems, AI has given us a very powerful development (and vulnerability remediation) advantage.
Big Sleep is also being deployed to help improve the security of other widely-used open-source projects, a major win for faster, more effective security across the internet.
Empowering defensive AI agents
While AI agents represent a sea change for cybersecurity, the work they do needs to be done safely and responsibly. We outlined our approach to building AI agents in June in ways that safeguard privacy, mitigate the risks of rogue actions, and ensure the agents operate with the benefit of human oversight and transparency. When deployed according to secure by design principles, agents can give defenders an edge like no other tool that came before them.
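As one illustration of what mitigating rogue actions and preserving human oversight can look like, here’s a minimal sketch of a human-approval gate for high-risk agent actions. The action names and risk tiers are invented for the example; this is not a Google API.

```python
# A hedged sketch of a human-in-the-loop guardrail: high-risk agent
# actions are blocked unless a human has explicitly approved them.
from dataclasses import dataclass

# Hypothetical risk tiering; real deployments would derive this from policy.
HIGH_RISK_ACTIONS = {"delete_resource", "disable_logging", "grant_admin"}

@dataclass
class AgentAction:
    name: str
    target: str

def execute(action: AgentAction, human_approved: bool = False) -> str:
    """Run an agent action only if its risk tier permits it."""
    if action.name in HIGH_RISK_ACTIONS and not human_approved:
        # Rogue-action mitigation: escalate to a human instead of acting.
        return f"BLOCKED: {action.name!r} on {action.target} needs human sign-off"
    return f"EXECUTED: {action.name!r} on {action.target}"

print(execute(AgentAction("disable_logging", "prod-cluster")))
print(execute(AgentAction("disable_logging", "prod-cluster"), human_approved=True))
```

The design choice worth noting is the default: risky actions fail closed, so oversight and transparency don’t depend on the agent remembering to ask.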
We will continue to share our agentic AI insights and report findings through our industry-standard disclosure process. You can keep tabs on all publicly-disclosed vulnerabilities from Big Sleep on our issue tracker page.
We’re seeing the impact of AI across security, from boosting threat hunting to stronger security validations to smarter red team analyses. Similarly, the speed and accuracy of AI come to the aid of defenders dealing with the ever-growing onslaught of email phishing attacks. Attackers have been using AI to erase the telltale signs that a legit-looking email was actually a phishing attack, adopting colloquial language and proper slang, and tailoring the email to the recipient.
Yet if you train an AI model on what spearphishing emails look like, it can get better at detecting, triaging, and identifying phishing threats faster and at a scale where, when a human does have to jump in and review something, there is far less to review. Our AI-powered defenses help Gmail block all sorts of phishing, spam, and malware.
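As a toy illustration of that idea, here’s a minimal sketch of a text classifier trained to score messages for phishing likelihood. The inline examples are invented, and production systems train on far richer signals than raw text.

```python
# A hedged sketch: train a tiny classifier on labeled emails so triage
# scales and humans only review high-probability hits. Data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Urgent: your payroll account is locked, verify credentials here",
    "Hey, quick favor, can you buy gift cards for the offsite today?",
    "Agenda attached for Thursday's quarterly planning meeting",
    "Your pull request was merged, thanks for the contribution",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

# Score a new message; a human only steps in above some threshold.
prob = model.predict_proba(["Please confirm your login to release your bonus"])[0][1]
print(f"phishing probability: {prob:.2f}")
```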
When it comes to attackers and their use of AI, we’re still in the “before times.” As I noted at RSAC, Google Threat Intelligence Group has seen AI used to flesh out code, to create deepfakes, and to craft better spearphishing emails, but we’ve yet to see a big, game-changing incident where AI did something that humans simply couldn’t have done. We haven’t seen anything like an agentic attacker, an agentic attack, or a self-perpetuating campaign.
I fully anticipate that these types of attacks are coming, so it’s crucial that AI developers collaborate across industry and with public sector partners to prepare defenders and ensure AI’s success. As part of our efforts to build partnerships, we worked with industry partners last year to launch the Coalition for Secure AI (CoSAI), an initiative to ensure the safe implementation of AI systems.
To further this work, we announced yesterday that Google will donate data from our Secure AI Framework (SAIF) to help accelerate CoSAI’s agentic AI, cyber defense, and software supply chain security workstreams.
At Google, we’ve been investing in AI and machine learning tools for more than a decade. While we have always believed in AI’s potential to help make software more secure, over the last year we have seen real leaps in its capabilities, with AI redefining what lasting and durable cybersecurity can look like.
You can learn more about our efforts to use AI to help secure and support organizations around the world from our Office of the CISO.
Here are the latest updates, products, services, and resources from our security teams so far this month:
Please visit the Google Cloud blog for more security stories published this month.
Please visit the Google Cloud blog for more threat intelligence stories published this month.
To have our Cloud CISO Perspectives post delivered twice a month to your inbox, sign up for our newsletter. We’ll be back in a few weeks with more security-related updates from Google Cloud.