Legit Security, an application security posture management (ASPM) platform, has released the cybersecurity industry's first AI code detection capabilities. The technology lets CISOs and their AppSec teams investigate when and where AI-generated code is used, giving them control and extensive visibility so that application delivery stays secure without slowing software development momentum.
As developers race to harness AI and large language models (LLMs) to build and ship new capabilities, a variety of new risks emerge. AI-generated code may harbor unknown vulnerabilities and flaws that put the entire application at risk, and it can also create legal exposure if it reproduces copyright-restricted material.
A further danger is improper implementation of AI capabilities, which can lead to data breaches. Despite these threats, security teams often have limited insight into where AI-generated code is used and how it affects the organization and the software supply chain, leaving blind spots.
“There is a huge disconnect between what CISOs and their teams believe to be true and what is actually happening in development,” said Dr. Gary McGraw, co-founder of the Berryville Institute of Machine Learning (BIML) and author of “Software Security.” “This gap in understanding is particularly acute when it comes to why, when, and how developers adopt AI technology.”
A recent BIML publication, “An Architectural Risk Analysis of Large Language Models,” identifies 81 specific LLM risks, including a top 10 of the most critical. Dr. McGraw says these risks cannot be mitigated without a comprehensive understanding of where AI is being used.
Legit Security's platform gives security leaders, including CISOs, product security leaders, and security architects, a comprehensive view of risk across the development pipeline. With clear visibility into the development lifecycle, customers can be confident that their code is secure, compliant, and traceable. The new AI code detection capabilities strengthen the platform by closing this visibility gap, enabling security teams to act proactively, reduce legal exposure, and maintain compliance.
“AI offers developers huge potential to deliver faster and innovate, but organizations must understand the risks it may introduce,” said Liav Caspi, co-founder and chief technology officer at Legit Security. “Our goal is to give developers the peace of mind that comes with visibility and control over their AI and LLM applications, while ensuring that nothing holds them back. When we showed customers how and where AI was being used, it was a revelation.”
Legit's AI code discovery capabilities provide many benefits, starting with complete visibility into the application environment: a full view of development environments, repositories using LLMs, MLOps services, code generation tools, and more. The platform can detect LLM and GenAI development and enforce organizational security policies, such as requiring human review of all AI-generated code.
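To make a policy like “human review of all AI-generated code” concrete, the sketch below shows one way such a gate could work in a CI pipeline. It is an illustration only, not Legit's implementation: the commit-trailer markers, the pull-request fields, and the helper names are assumptions made for the example.

```python
# Illustrative sketch of an "AI code requires human review" policy gate.
# Not Legit Security's implementation; the data shapes are hypothetical.

# Assumed markers: commits tag AI assistance via Co-authored-by trailers.
AI_COAUTHOR_MARKERS = ("github copilot", "chatgpt", "openai")

def commit_is_ai_assisted(commit: dict) -> bool:
    """Heuristic: treat a commit as AI-assisted if a trailer names a
    known AI coding assistant as a co-author."""
    return any(
        t.lower().startswith("co-authored-by:")
        and any(marker in t.lower() for marker in AI_COAUTHOR_MARKERS)
        for t in commit.get("trailers", [])
    )

def violates_human_review_policy(pull_request: dict) -> list[str]:
    """Return SHAs of AI-assisted commits in a PR with no human approval."""
    human_approved = bool(pull_request.get("approved_by"))
    return [
        c["sha"]
        for c in pull_request["commits"]
        if commit_is_ai_assisted(c) and not human_approved
    ]

if __name__ == "__main__":
    pr = {
        "approved_by": [],  # no human reviewer has approved yet
        "commits": [
            {"sha": "a1b2c3", "trailers": ["Co-authored-by: GitHub Copilot <copilot@github.com>"]},
            {"sha": "d4e5f6", "trailers": []},
        ],
    }
    flagged = violates_human_review_policy(pr)
    if flagged:
        print(f"Policy violation: AI-assisted commits lack human review: {flagged}")
```

In a real pipeline, a check like this would run on every pull request and block the merge until a human approval lands.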
Other features include real-time notifications when GenAI-written code is detected, which increases transparency and accountability, along with guardrails that prevent vulnerable code from being deployed to production. Legit can also scan LLM application code for security risks and alert teams to LLM-specific issues.
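The snippet below sketches the kind of LLM-application risk such a scan might look for, using one well-known example: user input interpolated directly into a prompt, a common prompt-injection surface. It is illustrative only, not Legit's scanner; the regex pattern, the variable names, and the llm.complete call in the sample code are hypothetical.

```python
# Minimal sketch of an LLM-application risk scan (not Legit's scanner).
# Flags lines where user-controlled input flows straight into a prompt,
# a common prompt-injection surface. Patterns here are assumptions.
import re

# Assumed risk pattern: an f-string prompt embedding a user-input variable.
RISKY_PROMPT = re.compile(r'f["\'].*\{(user_input|request|query)\}', re.IGNORECASE)

def scan_for_prompt_injection(source: str) -> list[int]:
    """Return 1-based line numbers where a prompt interpolates raw user input."""
    return [
        lineno
        for lineno, line in enumerate(source.splitlines(), start=1)
        if RISKY_PROMPT.search(line)
    ]

if __name__ == "__main__":
    snippet = '''
prompt = f"Summarize this ticket: {user_input}"   # raw user text in the prompt
response = llm.complete(prompt)
'''
    for lineno in scan_for_prompt_injection(snippet):
        print(f"line {lineno}: user input flows into an LLM prompt unchecked")
```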