Thursday, November 2, 2023
HomeCyber SecurityGoogle Presents Bug Bounties for Generative AI Safety Vulnerabilities

Google Presents Bug Bounties for Generative AI Safety Vulnerabilities


Google’s Vulnerability Reward Program gives as much as $31,337 for locating potential hazards. Google joins OpenAI and Microsoft in rewarding AI bug hunts.

Google logo at Googleplex Silicon Valley Mountain View in California.
Picture: Markus Mainka/Adobe Inventory

Google expanded its Vulnerability Rewards Program to incorporate bugs and vulnerabilities that could possibly be present in generative AI. Particularly, Google is searching for bug hunters for its personal generative AI, merchandise reminiscent of Google Bard, which is obtainable in lots of international locations, or Google Cloud’s Contact Heart AI, Agent Help.

“We imagine it will incentivize analysis round AI security and safety, and produce potential points to mild that may finally make AI safer for everybody,” Google’s Vice President of Belief and Security Laurie Richardson and Vice President of Privateness, Security and Safety Engineering Royal Hansen wrote in an Oct. 26 weblog put up. “We’re additionally increasing our open supply safety work to make details about AI provide chain safety universally discoverable and verifiable.”

Leap to:

Google’s bug bounty program: Limitations and rewards

There are limitations as to what counts as a vulnerability in generative AI; an entire listing of what vulnerabilities Google considers in scope or out of scope for the Vulnerability Rewards Program is in this Google safety weblog.

Generative AI introduces dangers conventional computing doesn’t; these dangers embrace unfair bias, mannequin manipulation and misinterpretations of knowledge, Richardson and Hansen wrote. Notably, AI “hallucinations” — misinformation generated inside a personal looking session — don’t depend as vulnerabilities for the needs of the Vulnerability Rewards Program. Assaults that expose delicate data, change the state of a Google person’s account with out their consent or present backdoors right into a generative AI mannequin are inside scope.

In the end, anybody collaborating within the bug bounty must show that the vulnerability they uncover may “pose a compelling assault situation or possible path to Google or person hurt,” in line with the Google safety weblog.

Doable Google AI bug bounty rewards

Rewards for the Vulnerability Rewards Program vary from $100 to $31,337, relying on the kind of vulnerability. Particulars on rewards, payouts will be discovered on Google’s Bug Hunters web site.

Different bug bounties and customary assault sorts in generative AI

OpenAI, Microsoft and different organizations supply bug bounties for white hat hackers who discover vulnerabilities in generative AI programs. Microsoft gives between $2,000 and $15,000 for qualifying bugs. OpenAI’s bug bounty program will give between $200 and $20,000.

SEE: IBM X-Pressure researchers discovered phishing emails written by persons are barely extra prone to get clicks than these written by ChatGPT. (TechRepublic)

In an October 26 report, HackerOne and OWASP discovered that the commonest vulnerability in generative AI was immediate injection (i.e., utilizing prompts to make the AI mannequin do one thing it was not supposed to do), adopted by insecure output dealing with (i.e., when LLM output is accepted with out scrutiny) and the manipulation of coaching information.

The way to study to make use of generative AI

Builders and safety researchers simply beginning out with generative AI have loads of choices in the case of studying how one can use it, from experimenting with free functions reminiscent of ChatGPT to taking skilled programs. DeepLearning.AI has programs at each newbie and superior ranges for professionals who need to discover ways to use and develop for synthetic intelligence and machine studying.



Supply hyperlink

RELATED ARTICLES

LEAVE A REPLY

Please enter your comment!
Please enter your name here

- Advertisment -
Google search engine

Most Popular

Recent Comments