Thursday, June 27, 2024

DeepKeep Launches GenAI Risk Assessment Module


DeepKeep, the leading provider of AI-Native Trust, Risk, and Security Management, announces the launch of its GenAI Risk Assessment module, designed to secure GenAI's LLMs and computer vision models. The module focuses on penetration testing and on identifying potential vulnerabilities and threats to model security, trustworthiness, and privacy.

Assessing and mitigating AI model and application vulnerabilities helps ensure implementations are compliant, fair, and ethical. DeepKeep's Risk Assessment module offers a comprehensive ecosystem approach, considering the risks associated with model deployment and identifying application weak spots.

DeepKeep's assessment provides a thorough examination of AI models, ensuring high standards of accuracy, integrity, fairness, and efficiency. The module helps security teams streamline GenAI deployment processes, providing a range of scoring metrics for evaluation.

Core features include:

  • Penetration testing
  • Identifying the model's tendency to hallucinate
  • Identifying the model's propensity to leak private data
  • Assessing toxic, offensive, harmful, unfair, unethical, or discriminatory language
  • Assessing biases and fairness
  • Weak-spot analysis

For example, when DeepKeep's Risk Assessment module was applied to Meta's LLM LlamaV2 7B to examine prompt manipulation sensitivity, the findings pointed to a weakness in English-to-French translation, as depicted in the chart below*:

"The market must be able to trust its GenAI models, as more and more enterprises incorporate GenAI into daily business processes," says Rony Ohayon, DeepKeep's CEO and Founder. "Evaluating model resilience is paramount, particularly during the inference phase, in order to provide insights into the model's ability to handle various scenarios effectively. DeepKeep's goal is to empower businesses with the confidence to leverage GenAI technologies while maintaining high standards of transparency and integrity."

DeepKeep's GenAI Risk Assessment module secures AI alongside its AI Firewall, enabling live protection against attacks on AI applications. Detection capabilities cover a wide range of security and safety categories, leveraging DeepKeep's proprietary technology and cutting-edge research.

*ROUGE and METEOR are natural language processing (NLP) methods for evaluating machine learning outputs. Scores range from 0 to 1, with 1 indicating a perfect score.
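To make the footnote concrete: ROUGE-style metrics score a model's output by measuring word overlap against a reference. The sketch below is a minimal, from-scratch illustration of a ROUGE-1-style unigram F1 score; it is not DeepKeep's implementation, and production evaluations typically use established libraries (e.g., the `rouge-score` package) with stemming and multiple ROUGE variants.

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Unigram-overlap (ROUGE-1 style) F1 between a candidate and a reference."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # shared unigrams, counted with multiplicity
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# A faithful translation overlaps heavily with the reference and scores near 1;
# an unrelated output scores near 0.
print(rouge1_f1("le chat est sur le tapis", "le chat est sur le tapis"))  # 1.0
print(rouge1_f1("un chien dort dehors", "le chat est sur le tapis"))     # 0.0
```

A drop in such scores on a specific task (here, English-to-French translation) is the kind of signal a risk assessment can surface as a model weakness.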



