Approximately 77% of organizations have implemented or are considering some form of AI to drive more efficient and automated workflows.
With increasing reliance on LLMs such as GenAI models and ChatGPT, the need for robust security measures became paramount, leading Akto to launch the GenAI Security Testing Solution.
“Akto has new capabilities to scan APIs powered by AI technology, which is fundamental to the future of application security. We are excited to see security companies in the world doing the same for security assessments around AI technology,” said Jim Manico, former OWASP global board member and secure coding educator.
On average, organizations use 10 GenAI models. Most LLMs in production environments often receive data indirectly through APIs. This means that a large amount of sensitive data is being processed by the LLM API. Ensuring the security of these APIs is critical to protecting user privacy and preventing data leaks. There are currently several ways in which LLM can be misused and lead to the disclosure of sensitive data.
- Prompt injection vulnerability – The risk of unauthorized prompt injection, where the output of an LLM is manipulated by malicious input, is a major concern.
- Denial of Service (DoS) threats – LLM is also susceptible to DoS attacks, where the system is overloaded with requests and leads to service interruption. Last year, we saw an increase in the number of reported DoS incidents targeting the LLM API.
- Excessive reliance on LLM output – Over-reliance on LLMs without proper verification mechanisms has led to cases of data inaccuracies and leaks. As the industry is seeing an increase in data breach incidents due to over-reliance on LLM, organizations are encouraged to implement robust validation processes.
“Ensuring the security of GenAI systems requires a multifaceted approach, as the AI must not only be protected from external inputs, but also the external systems that rely on its outputs,” says the OWASP Top 10 for said LLM AI Applications Core team member.
On March 20, 2023, OpenAI's AI tool ChatGPT experienced an outage. The outage was caused by a vulnerability in an open source library that may have exposed payment-related information for some customers. Most recently, on January 25, 2024, a critical vulnerability was discovered in Anything LLM (Github Stars 8,000). This vulnerability converts any document or content into a context that LLM can use during a chat.
An unauthenticated API route (file export) could allow an attacker to crash the server and cause a denial of service attack. These are just a few examples of security incidents related to the use of LLM models.
Akto's GenAI security testing solution tackles these challenges head-on. Akto provides comprehensive security assessment of GenAI models, including LLM, by leveraging advanced testing methodologies and algorithms.
The solution incorporates a wide range of innovative features, including over 60 carefully designed test cases covering various aspects of GenAI vulnerabilities, such as prompted injection, over-reliance on specific data sources, etc. I am. These test cases were developed by Akto's team of GenAI security experts to ensure the highest level of protection for organizations deploying GenAI models.
Our security team currently manually tests all LLM APIs for flaws before release. Due to time constraints on product releases, teams can only test a small number of vulnerabilities. As hackers continue to seek more creative ways to exploit his LLM, security teams must find automated ways to protect his LLM at scale.
“Often, the input to an LLM comes from an end user, the output is displayed to an end user, or both. This test uses different encoding methods, delimiters, and markers to Attempts to exploit vulnerabilities. It specifically detects weak security practices where developers encode input or place special markers around input.” Ankush Jain, CTO, Akto.io says:
AI security testing identifies vulnerabilities in security measures to sanitize LLM output. This is intended to detect attempts to inject malicious code for remote execution, cross-site scripting (XSS), and other attacks that could allow an attacker to extract session tokens and system information. is. Additionally, Akto also tests whether LLMs can generate false or irrelevant reports.
“From prompt injection (LLM:01) to overreliance (LLM09), new vulnerabilities and breaches emerge every day, making our systems secure by default. Testing is key, and I can't wait to see what Akto has in store for my LLM projects,” said OWASP Top 10 for LLM AI Applications Core Team Member.
To further emphasize the importance of GenAI security, a recent study conducted by Gartner in September 2023 found that 34% of organizations already use or deploy AI application security tools to reduce the risks associated with GenAI. It became clear that 56% of respondents said they were also considering such a solution, highlighting the critical need for a robust security testing solution like Akto.
As organizations seek to harness the power of AI, Akto is at the forefront of ensuring the security and integrity of these innovative technologies. The announcement of the GenAI security testing solution strengthens the company's commitment to innovation and to enabling organizations to deploy his GenAI with confidence.