[Image: CodeMender (courtesy: Google)]
October 6, 2025
Google today announced a series of major initiatives aimed at strengthening AI security and helping the global tech community build safer AI systems. The updates include the launch of CodeMender, an AI-powered agent that automatically fixes code vulnerabilities; a new AI Vulnerability Reward Program (AI VRP) for security researchers; and the release of the Secure AI Framework (SAIF) 2.0, an updated set of industry guidelines for securing autonomous AI agents.
CodeMender: AI that fixes code automatically
At the center of the announcement is CodeMender, a new autonomous AI agent developed by Google that uses Gemini models to identify and repair security flaws in code. Unlike traditional tools, CodeMender performs root cause analysis to detect vulnerabilities and generates validated patches that are reviewed by specialized AI “critique” agents before being finalized by human engineers.
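Google has not published CodeMender's internals, but the workflow the announcement describes (root-cause analysis, patch generation, automated critique, then human sign-off) can be pictured as a simple pipeline. The sketch below is purely hypothetical: every name in it (find_root_cause, propose_patch, critique) is invented for illustration and is not part of any Google API.

```python
from dataclasses import dataclass
from typing import Optional

# Purely illustrative sketch of a CodeMender-style patching pipeline.
# All functions here are hypothetical stand-ins, not Google's actual code.

@dataclass
class Patch:
    file: str
    diff: str
    rationale: str

def find_root_cause(report: str) -> str:
    """Stand-in for LLM-driven root-cause analysis of a vulnerability report."""
    return f"root cause inferred from: {report}"

def propose_patch(cause: str) -> Patch:
    """Stand-in for a model-generated candidate fix."""
    return Patch(file="parser.c",
                 diff="- memcpy(dst, src, len);\n+ memcpy(dst, src, MIN(len, dst_size));",
                 rationale=cause)

def critique(patch: Patch) -> bool:
    """Stand-in for the 'critique' agents that validate a candidate patch
    (e.g., does it build, pass tests, and avoid introducing regressions?)."""
    return bool(patch.diff) and bool(patch.rationale)

def pipeline(report: str) -> Optional[Patch]:
    cause = find_root_cause(report)      # 1. root-cause analysis
    patch = propose_patch(cause)         # 2. generate a candidate patch
    if not critique(patch):              # 3. automated review gate
        return None
    return patch                         # 4. validated patch goes to a human engineer

if __name__ == "__main__":
    result = pipeline("heap overflow in image parser fuzz target")
    print("ready for human review" if result else "rejected by critique agents")
```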
Google says CodeMender will help accelerate time-to-patch across open-source ecosystems and strengthen proactive defenses against growing cyber threats.
“AI can be a game-changing tool for cyber defense,” said Evan Kotsovinos, Vice President of Privacy, Safety & Security at Google. “With CodeMender, we’re tipping the scales in favor of defenders by making vulnerability discovery and patching faster, smarter, and safer.”
Expanding collaboration: The AI Vulnerability Reward Program
Building on its long-running security reward efforts, Google has also launched the AI Vulnerability Reward Program, designed to encourage security researchers to identify and report AI-related vulnerabilities.
The new program consolidates security and abuse-related reporting under a single set of rules and reward tables, simplifying submissions and improving transparency.
Since inception, Google’s AI-related VRPs have paid out more than $430,000 in rewards to researchers.
Strengthening AI systems: Secure AI Framework 2.0
To address the risks of increasingly autonomous AI systems, Google introduced Secure AI Framework (SAIF) 2.0, an expanded version of its earlier AI security guidelines.
SAIF 2.0 includes:
- A new agent risk map for identifying and managing threats across the AI stack.
- New security capabilities to keep AI agents secure by design, with well-defined human oversight and clear operational limits (a brief illustration follows this list).
- An industry contribution of risk-map data to the Coalition for Secure AI (CoSAI), supporting shared standards for AI risk management.
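SAIF 2.0 itself is guidance rather than code, but the "human oversight and clear operational limits" principle can be expressed as a minimal authorization gate. The sketch below is a hypothetical illustration, not anything prescribed by the framework; the action names and allowlist are invented.

```python
# Hypothetical illustration of "clear operational limits" for an AI agent:
# actions outside an allowlist are refused outright, and designated
# high-impact actions additionally require explicit human approval.

ALLOWED_ACTIONS = {"read_file", "run_tests", "open_pull_request"}
NEEDS_HUMAN_APPROVAL = {"open_pull_request"}  # the human-oversight point

def authorize(action: str, human_approved: bool = False) -> bool:
    if action not in ALLOWED_ACTIONS:
        return False              # hard operational limit
    if action in NEEDS_HUMAN_APPROVAL:
        return human_approved     # human stays in the loop
    return True

assert authorize("run_tests")
assert not authorize("delete_branch")                     # outside the allowlist
assert not authorize("open_pull_request")                 # blocked without sign-off
assert authorize("open_pull_request", human_approved=True)
```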
A proactive approach to AI security
Google says these initiatives reflect its broader mission to use AI for good — not only defending against cyber threats but actively improving global digital security.
The company continues to collaborate with public and private partners, including DARPA and CoSAI, to ensure that advancements in AI strengthen rather than endanger cybersecurity.
“Our goal is to make AI a decisive advantage for defenders,” said Four Flynn, VP of Security for Google DeepMind. “With CodeMender, the AI VRP, and SAIF 2.0, we’re helping secure the future of AI itself.”