Highlights:
- Lasso’s new custom policy wizard, integrated with the company’s browser extension and secure gateway, lets administrators establish policy guidelines in plain English.
- Policies can be tested against various use cases, and a tuning process lets employees remain productive while adjustments are made.
Generative artificial intelligence security company Lasso Security Inc. recently launched a custom contextual policy wizard designed to help companies prevent data leaks when using tools like OpenAI’s ChatGPT.
Lasso Security’s data leak prevention offering provides comprehensive cybersecurity and data management for large language models, giving enterprises end-to-end protection. It detects which AI apps and tools employees use and lets administrators craft policies that prevent data and knowledge leakage.
Data management has grown increasingly complex as AI tools have become standard in the workplace. Even as more third-party large language model services put security and privacy rules in place for data passing through them, internal compliance teams still need to ensure that employees aren’t accidentally sending prompts they shouldn’t to outside tools.
In the past, protection relied on rules-based policies that used patterns to detect problematic prompts sent to large language models. However, employees could unintentionally bypass these patterns simply by phrasing prompts differently. With Lasso’s new custom policy wizard, integrated with the company’s browser extension and secure gateway, administrators can establish policy guidelines in plain English.
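A minimal sketch of why pattern matching falls short (the regex and prompts here are illustrative, not Lasso’s actual rules):

```python
import re

# Classic rules-based DLP: a fixed regex per sensitive topic.
SALARY_PATTERN = re.compile(r"\bsalar(y|ies)\b", re.IGNORECASE)

prompts = [
    "What is Alice's salary?",           # caught by the pattern
    "How much do we pay Alice a year?",  # same intent, slips through
]

for prompt in prompts:
    flagged = bool(SALARY_PATTERN.search(prompt))
    print(f"flagged={flagged}  {prompt}")
```

The second prompt leaks the same information, but no fixed pattern list can anticipate every phrasing, which is the gap a contextual approach targets.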
“It’s all about the context,” Lasso Security co-founder and Chief Product Officer Ophir Dror told a well-known media outlet. “In order to solve the emerging problem of knowledge leak (as opposed to structured data leak), we completely shifted how we look at data protection. No more patterns or pre-defined regexes that fail to catch ‘near’ or ‘similar.’”
For instance, if a policy prohibits HR employees from discussing salaries, the AI engine comprehends and blocks interactions related to wages, compensation, and benefits within the organizational context. However, it would still permit discussion of general, publicly available salary information within the company.
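In code, a plain-English policy of this kind might look something like the sketch below. The `Policy` shape and the `classify` stand-in are assumptions made for illustration; Lasso hasn’t published its internals, and a real engine would use a contextual model rather than a word list:

```python
from dataclasses import dataclass

@dataclass
class Policy:
    name: str
    rule: str  # the plain-English guideline, as written in the wizard

HR_POLICY = Policy(
    name="hr-compensation",
    rule=("HR staff must not discuss individual salaries, compensation "
          "packages, or benefits. Public salary information is allowed."),
)

def classify(rule: str, prompt: str) -> bool:
    """Stand-in for a contextual model judging a prompt against a
    plain-English rule. Faked here with a concept set so the sketch
    runs; a real engine would understand paraphrases and context."""
    concepts = {"salary", "salaries", "wages", "pay", "compensation",
                "benefits", "bonus"}
    words = {w.strip(".,?!'").lower() for w in prompt.split()}
    return bool(words & concepts)

def violates(policy: Policy, prompt: str) -> bool:
    return classify(policy.rule, prompt)

print(violates(HR_POLICY, "How much do we pay new analysts?"))  # True
print(violates(HR_POLICY, "Summarize this meeting agenda."))    # False
```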
Policies configured by administrators are live and can be adjusted at any time. Administrators receive alerts and telemetry on how the policies perform, and a validation process runs whenever a rule is created. Policies can also be tested against various use cases, and a tuning process lets employees remain productive while adjustments are made.
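Testing a rule against sample use cases could look roughly like the following; the harness and the deliberately naive stand-in checker are hypothetical:

```python
# Hypothetical regression suite for one policy: sample prompts paired
# with the decision the rule is expected to make.
TEST_CASES = [
    ("How much do we pay new analysts?", True),               # should block
    ("What does Glassdoor say about our pay bands?", False),  # public info
]

def violates(prompt: str) -> bool:
    """Deliberately naive stand-in for the deployed policy check."""
    blocked_terms = {"pay", "salary", "compensation", "benefits"}
    return any(term in prompt.lower() for term in blocked_terms)

for prompt, expected in TEST_CASES:
    got = violates(prompt)
    status = "ok" if got == expected else "NEEDS TUNING"
    print(f"{status:>12}  expected={expected}  got={got}  {prompt}")
```

Here the naive checker over-blocks the public-information question, which is exactly the kind of miss a validation and tuning loop is meant to surface before employees are disrupted.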
On the user side, behavior depends on the options administrators configure to match organizational policy. If an issue is detected, the session is blocked and the user is prompted to write a new prompt to continue. An alert goes out to administrators, and an entry appears in the management console, enabling the admin to investigate further.
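That enforcement flow might be sketched as follows; the `handle_prompt` hook and the in-memory audit log are hypothetical stand-ins for the browser extension/gateway and the management console:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Event:
    user: str
    prompt: str
    policy: str
    time: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

audit_log: list[Event] = []  # stand-in for the management console's store

def notify_admins(event: Event) -> None:
    print(f"[alert] {event.user} triggered policy '{event.policy}'")

def handle_prompt(user: str, prompt: str, violation: str | None) -> str:
    """Hypothetical hook on the extension/gateway path: forward clean
    prompts, block violations, alert admins, and log for investigation."""
    if violation is None:
        return "forwarded to the LLM"
    event = Event(user=user, prompt=prompt, policy=violation)
    audit_log.append(event)   # shows up in the console for investigation
    notify_admins(event)      # administrators are alerted
    return "blocked; please write a new prompt to continue"

print(handle_prompt("alice@example.com", "How much do we pay analysts?",
                    violation="hr-compensation"))
```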
Dror said, “In the era of generative AI, traditional data protection mechanisms are not enough anymore. Structured data is still a concern, but a new concern now emerges — knowledge leakage. When an employee is sending specs of your next features, designers send briefs of future models, and finance personnel send budgets, the existing security stack fails.”