Google is stepping up the security of its AI services with a new bug-bounty program.
In Brief
- Rewards of up to $30,000 for security researchers
- Focus on deeper security issues and "invisible" manipulations
- New AI tool "CodeMender" to improve open-source security
Google’s New Bug-Bounty Program for AI Products
Google has launched a new bug-bounty program specifically focused on its AI products. Security researchers who identify significant flaws in applications like the Gemini app or AI search can earn rewards of up to $30,000. With this step, Google aims to intensify collaboration with external experts while simultaneously elevating the security of its AI services to a new level.
Focus on Deep Security Issues
The program does not just reward the discovery of superficial flaws; it also aims to surface deeper security issues. These include "invisible" manipulations that can compromise the security or functionality of an account without being noticed. Particularly innovative and well-documented discoveries can reach the maximum reward—a strong incentive likely to attract many security researchers.
Introduction of Google DeepMind's Tool "CodeMender"
In parallel, Google DeepMind has introduced a new AI tool called "CodeMender." The tool specializes in making existing code more secure by identifying and fixing vulnerabilities in open-source projects. In recent months, CodeMender has already addressed numerous security issues.
Google’s Commitment to AI Security
With these initiatives, Google demonstrates that it is not only focused on the development of AI technologies but also actively works on their security.
Sources
- Source: Google
- The original article was published here
- This article was covered in the podcast KI-Briefing-Daily. You can listen to the episode here.