Cyber adversaries are inventing new techniques that tilt the AI war in their favor faster than anyone anticipated, and cybersecurity vendors are redoubling their efforts to improve their arsenals in response.
But what if that's not enough? Every enterprise is rapidly adopting AI, and new generative AI-based security technologies are needed to keep pace. That question is at the heart of how Menlo Ventures has chosen to evaluate eight areas where gen AI is having a significant impact on security.
Get ahead of new threats now
VentureBeat recently spoke (virtually) with Menlo Ventures' Rama Sekhar and Feyza Haskaraman. Sekhar is a new partner at Menlo Ventures focused on cybersecurity, AI and cloud infrastructure; Haskaraman focuses on cybersecurity, SaaS, supply chain and automation. The two have collaborated on a series of blog posts explaining why closing the AI security gap is critical as generative AI scales across organizations.
Throughout the interview, Sekhar and Haskaraman explained that for AI to reach its full potential across the enterprise, an entirely new technology stack is needed, with security designed in starting from the software supply chain and model development. They focus on eight areas where gen AI can best protect large language models (LLMs) while mitigating risk, enhancing compliance and achieving scale in model and LLM development.
Predicting where generative AI will have the greatest impact
Here are the eight areas where Sekhar and Haskaraman predict gen AI will have the biggest impact:
Vendor risk management and compliance automation. In Menlo Ventures' view of how risk management will evolve, cybersecurity now includes securing the entire third-party application stack as enterprises communicate, collaborate and integrate with third-party vendors and customers. Sekhar and Haskaraman say current vendor security processes are laborious and error-prone, making them ideal candidates to automate and improve with gen AI. Menlo Ventures cites Dialect, an AI assistant that automatically fills out security and other questionnaires based on a company's own data and answers them quickly and accurately, as an example of a leading vendor in this space.
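To make the idea concrete, here is a minimal sketch of the questionnaire-automation pattern described above, using plain keyword overlap to match an incoming security question to previously approved answers. It is purely illustrative: Dialect's actual retrieval and generation pipeline is not described in this article, so nothing here should be read as its implementation.

```python
# Hedged sketch of questionnaire automation: match each incoming security question
# to a previously approved answer. The stored Q&A pairs are hypothetical examples.
APPROVED_ANSWERS = {
    "Do you encrypt data at rest?": "Yes, all customer data is encrypted at rest with AES-256.",
    "Do you have a SOC 2 report?": "Yes, a SOC 2 Type II report is available under NDA.",
    "How often do you run penetration tests?": "Annually, by an independent third party.",
}

def best_match(question: str) -> tuple[str, str]:
    q_words = set(question.lower().split())
    # Pick the stored question sharing the most words with the new one.
    known = max(APPROVED_ANSWERS, key=lambda k: len(q_words & set(k.lower().split())))
    return known, APPROVED_ANSWERS[known]

matched, answer = best_match("Is customer data encrypted at rest?")
print(f"Matched: {matched}\nDraft answer: {answer}")
```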
Security training. Security training is often criticized for a lack of results, since breaches still occur at companies that have invested heavily in it. Menlo Ventures believes gen AI will enable more customized, engaging and dynamic employee training content that better simulates risk. For example, Immersive Labs uses generative AI to simulate attacks and incidents for security teams, while Riot's security copilot coaches employees through interactive security awareness training in Slack or online. Menlo Ventures believes this type of technology can make security training more effective.
Penetration testing. As gen AI is used in attacks, penetration testing must become more adaptable and flexible, doing more to simulate successive AI-automated attacks. Menlo Ventures believes testing procedures can be strengthened by using gen AI to search public and private databases for signs of criminal activity, scan customers' IT environments, explore potential exploits, suggest remediation steps and summarize findings in automatically generated reports.
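The report-generation step mentioned above can be pictured with a short sketch like the one below. The finding fields and layout are hypothetical assumptions made for illustration, not any vendor's report format.

```python
# Minimal sketch of auto-generating a pentest findings report.
# Finding fields, hostnames and layout are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Finding:
    title: str
    asset: str
    severity: str
    remediation: str

findings = [
    Finding("Outdated TLS configuration", "vpn.example.com", "High",
            "Disable TLS 1.0/1.1 and weak cipher suites."),
    Finding("Exposed admin panel", "app.example.com/admin", "Critical",
            "Restrict access by IP and enforce MFA."),
]

def render_report(findings: list[Finding]) -> str:
    lines = ["Penetration Test Summary", "=" * 24]
    # Alphabetical order happens to put Critical before High; a real tool would rank properly.
    for f in sorted(findings, key=lambda f: f.severity):
        lines += [f"[{f.severity}] {f.title} ({f.asset})",
                  f"  Remediation: {f.remediation}", ""]
    return "\n".join(lines)

print(render_report(findings))
```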
Anomaly detection and prevention. Sekhar and Haskaraman believe gen AI will also improve anomaly detection and prevention by automatically monitoring event logs and telemetry data to spot anomalous activity that may signal an intrusion attempt. Gen AI shows the potential to scale across vulnerable endpoints, networks, APIs and data repositories, further strengthening security across the broader network.
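A minimal sketch of the kind of log-based anomaly scoring described here, assuming scikit-learn and a heavily simplified feature set; it illustrates the technique rather than Menlo Ventures' or any vendor's implementation.

```python
# Toy illustration of anomaly scoring over session telemetry (not a production detector).
# Assumes scikit-learn is installed; the features are simplified stand-ins for real logs.
from sklearn.ensemble import IsolationForest
import numpy as np

# Hypothetical per-session features: [failed_logins, bytes_out_mb, distinct_ports, off_hours]
baseline = np.array([
    [0, 1.2, 3, 0],
    [1, 0.8, 2, 0],
    [0, 2.1, 4, 1],
    [0, 1.0, 3, 0],
])

new_sessions = np.array([
    [0, 1.1, 3, 0],     # looks like normal activity
    [9, 480.0, 52, 1],  # many failures, large egress, port scanning, off hours
])

model = IsolationForest(contamination=0.1, random_state=42).fit(baseline)
scores = model.decision_function(new_sessions)  # lower score = more anomalous
for features, score in zip(new_sessions, scores):
    flag = "ALERT" if score < 0 else "ok"
    print(flag, features.tolist(), round(float(score), 3))
```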
Synthetic content detection and validation. Cyberattackers use gen AI to create convincing, high-fidelity digital identities that can bypass identity verification software, document verification software and manual reviews. Cybercriminal organizations and nation-state actors use stolen data to create synthetic, fraudulent identities. The FTC estimates that each instance of identity fraud can cost more than $15,000. Wakefield and Deduce found that 76% of companies have extended credit to synthetic customers, and that AI-driven identity fraud has increased by 17% over the past two years.
Next-generation verification helps businesses combat synthetic content. Deduce created a multi-context activity-backed identity graph of 840 million U.S. profiles to baseline authentic behavior and identify malicious actors. DeepTrust has developed an API-accessible model that detects audio clones, validates articles and transcripts, and identifies synthetic images and videos.
Code review. A "shift left" approach to software development prioritizes early testing to improve quality, security and time to market. To shift left effectively, security must be at the core of the CI/CD process. Too many automated security scans and SAST tools fall short, wasting security operations center (SOC) analysts' time. SOC analysts also tell VentureBeat that creating and validating custom rules is time-consuming and that the rules are difficult to maintain. Menlo Ventures says startups are making progress in this area; examples include Semgrep's customizable rules, which help security engineers and developers discover vulnerabilities and suggest fixes specific to their organization.
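As a rough illustration of the custom, shift-left checks discussed above, the toy script below flags likely hardcoded credentials during CI. It is not Semgrep's rule format or engine, and real SAST rules are far more sophisticated; it only shows the shape of a custom check wired into a pipeline.

```python
# Toy "shift-left" check run in CI: flag likely hardcoded credentials before code is merged.
# This is an illustration only, not any vendor's scanner or rule engine.
import re
import sys
import pathlib

SECRET_PATTERN = re.compile(r"""(?i)(password|api[_-]?key|secret)\s*=\s*["'][^"']+["']""")

def scan(path: pathlib.Path) -> list[str]:
    findings = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
        if SECRET_PATTERN.search(line):
            findings.append(f"{path}:{lineno}: possible hardcoded credential")
    return findings

if __name__ == "__main__":
    # Usage: python scan_secrets.py <source-directory>
    all_findings = [f for p in pathlib.Path(sys.argv[1]).rglob("*.py") for f in scan(p)]
    print("\n".join(all_findings) or "no findings")
    sys.exit(1 if all_findings else 0)  # fail the CI job when something is flagged
```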
Dependency management. According to Synopsys' 2023 OSSRA report, 96% of codebases contain open source, and projects often involve hundreds of third-party components. Sekhar and Haskaraman told VentureBeat that this is an area where they expect significant improvements thanks to gen AI. They pointed out that external dependencies, which are harder to control than internal code, require better traceability and patch management. An example of a vendor helping to solve these challenges is Socket, which proactively detects and blocks more than 70 supply chain risk signals in open source code, flags suspicious package updates and builds a security feedback loop into the development process.
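The notion of supply chain risk signals can be sketched as follows; the signal names, thresholds and package metadata are hypothetical and are not Socket's actual signals or API.

```python
# Hedged sketch of supply chain risk scoring. Signals and thresholds are invented
# for illustration and do not correspond to Socket's product.
from dataclasses import dataclass

@dataclass
class PackageMetadata:
    name: str
    days_since_publish: int
    has_install_script: bool
    maintainer_changed_recently: bool
    downloads_last_month: int

def risk_signals(pkg: PackageMetadata) -> list[str]:
    signals = []
    if pkg.days_since_publish < 3:
        signals.append("very new release")
    if pkg.has_install_script:
        signals.append("runs an install script")
    if pkg.maintainer_changed_recently:
        signals.append("recent maintainer change")
    if pkg.downloads_last_month < 500:
        signals.append("low adoption")
    return signals

# Example: a just-published package with an install script would be held for review.
suspect = PackageMetadata("left-padz", 1, True, True, 42)
print(suspect.name, "->", risk_signals(suspect) or "no signals")
```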
Defense automation and SOAR capabilities. Gen AI has the potential to streamline much of the work done in security operations centers, including improving alert fidelity and accuracy. SOCs face too many false alarms for analysts to follow up on, costing them time that could be spent on more complex work, while breaches can still be missed because of false negatives. Generative AI can deliver significant value to the SOC here, and the first goal is to reduce alert fatigue so analysts can do more valuable work.
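A simplified sketch of the alert-triage side of this idea appears below: duplicate alerts are collapsed and the remainder ranked by severity. It stands in for the much richer enrichment and automation a real SOAR pipeline, with or without a gen AI layer on top, would perform.

```python
# Simplified illustration of SOAR-style alert triage (not any vendor's product):
# collapse duplicate alerts and rank what is left so analysts see the riskiest items first.
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class Alert:
    rule: str
    host: str
    severity: int  # 1 (low) .. 5 (critical)

raw_alerts = [
    Alert("brute_force_login", "web-01", 4),
    Alert("brute_force_login", "web-01", 4),   # duplicate noise
    Alert("outbound_c2_beacon", "db-02", 5),
    Alert("failed_patch", "web-03", 2),
]

counts = Counter(raw_alerts)  # deduplicate, keeping how often each alert fired
triaged = sorted(counts.items(), key=lambda kv: (kv[0].severity, kv[1]), reverse=True)

for alert, seen in triaged:
    print(f"sev={alert.severity} x{seen} {alert.rule} on {alert.host}")
# An LLM-based summarization or enrichment step could be added here; it is omitted
# because the article does not specify any particular model or API.
```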
Plan now for the emerging threat landscape
Sekhar and Haskaraman believe that for gen AI to achieve enterprise-level growth, it must first solve the security challenges every organization faces as it works on its AI strategy. The eight areas where gen AI will have an impact show how unprepared many organizations are to transition to an enterprise-wide AI strategy. Gen AI can eliminate the tedious, time-consuming tasks that keep SOC analysts from digging into more complex projects. These eight areas are a start, and more needs to be done if organizations are to protect themselves from the coming onslaught of gen AI-based attacks.