Through the AI Cyber Defense Initiative, we continue to invest in AI-ready infrastructure, release new tools for defenders, and launch new research and AI security training. These efforts aim to use AI to secure, empower, and advance our collective digital future.
1. Secure. We believe AI, like any other technology, must be secure by design and by default; otherwise, AI risks deepening the defender's dilemma. That is why we launched the Secure AI Framework as a way to collaborate on best practices for securing AI systems. To build on these efforts and foster a more secure AI ecosystem, we are taking the following steps:
- Continuing to invest in our secure, AI-ready network of global data centers. To change the game in cyberspace, new AI innovations must be made available to public sector organizations and businesses of all sizes across industries. Over the 2019–2024 period, we will have invested more than $5 billion in our European data centers, supporting secure and reliable access to a wide range of digital services, including broad generative AI capabilities like the Vertex AI platform.
- Announcing a new "AI for Cybersecurity" cohort. Seventeen startups from the UK, US, and EU have been selected for the Google for Startups Growth Academy's AI for Cybersecurity program. The program will help strengthen the transatlantic cybersecurity ecosystem with internationalization strategies, AI tools, and the skills to use them.
2. Empower. Today's AI governance choices could reshape the landscape of cyberspace in unintended ways. Our societies need a balanced regulatory approach to the use and deployment of AI to avoid a future where attackers can innovate but defenders cannot. That requires targeted investments, industry and government partnerships, and effective regulatory approaches that help organizations maximize the value of AI while limiting its utility to adversaries. To give defenders the advantage, we are:
- Expanding the $15 million Google.org Cybersecurity Seminars Program, first announced at GSEC Malaga last year, to cover all of Europe. The program includes modules focused on AI and will help universities develop the next generation of cybersecurity professionals from underserved communities.
- Open-sourcing Magika, a new AI-powered tool that helps defenders through file type identification, which is essential for detecting malware. Magika is already used to help protect products including Gmail, Drive, and Safe Browsing, and is used by the VirusTotal team to foster a safer digital environment. Magika outperforms conventional file identification methods, providing a 30% boost in overall accuracy and up to 95% higher precision on traditionally hard-to-identify but potentially problematic content such as VBA, JavaScript, and PowerShell.
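To see why this task is hard, consider a toy, non-ML detector (a sketch for illustration only; Magika actually uses a trained deep-learning model, and the signatures and heuristics below are illustrative assumptions, not its method). "Magic bytes" work well for binary formats, but script content like JavaScript or PowerShell has no such signature, which is where conventional approaches fall short:

```python
# Toy content-based file type identifier. Binary formats are matched by
# leading "magic bytes"; textual scripts fall back to fragile keyword
# heuristics -- exactly where ML-based detection such as Magika helps most.

MAGIC_SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "png",
    b"%PDF-": "pdf",
    b"PK\x03\x04": "zip",   # also docx/xlsx/jar containers
    b"\x7fELF": "elf",
    b"MZ": "pe",            # Windows executables
}

def identify_bytes(data: bytes) -> str:
    """Guess a file type from its content alone."""
    for magic, label in MAGIC_SIGNATURES.items():
        if data.startswith(magic):
            return label
    # Textual formats have no magic bytes; keyword checks are easy to fool.
    text = data[:4096].decode("utf-8", errors="ignore").lower()
    if "write-host" in text or "param(" in text:
        return "powershell"
    if "function" in text and ("=>" in text or "var " in text):
        return "javascript"
    return "unknown"

print(identify_bytes(b"%PDF-1.7 ..."))        # pdf
print(identify_bytes(b"Write-Host 'hello'"))  # powershell
```

A detector like this misclassifies any script that avoids the listed keywords, which is why a learned model that generalizes over file content offers such a large precision gain on these formats.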
3. Advance. We are committed to advancing research that helps generate breakthroughs in AI-powered security. To support this effort, we are announcing $2 million in research grants and strategic partnerships that will advance the use of AI in cybersecurity research, including enhancing code verification, better understanding how AI can aid cyberattacks and defensive countermeasures, and developing large language models that are more resilient to threats. The funding supports researchers at institutions including the University of Chicago, Carnegie Mellon University, and Stanford University. This builds on our continued efforts to stimulate the cybersecurity ecosystem, including our $12 million commitment to New York's research institutions last year.
The AI revolution has already begun. While we rightly celebrate the promise of new medicines and scientific breakthroughs, we are also excited about AI's potential to solve generational security challenges and bring us closer to the safe, secure, and trusted digital world we deserve.