Monday, October 23, 2023

Our commitments to advance safe, secure, and trustworthy AI


Today, Microsoft is announcing its support for new voluntary commitments crafted by the Biden-Harris administration to help ensure that advanced AI systems are safe, secure, and trustworthy. By endorsing all of the voluntary commitments presented by President Biden and independently committing to several others that support these critical goals, Microsoft is expanding its safe and responsible AI practices, working alongside other industry leaders. 

By moving quickly, the White House's commitments create a foundation to help ensure the promise of AI stays ahead of its risks. We welcome the President's leadership in bringing the tech industry together to hammer out concrete steps that will help make AI safer, more secure, and more beneficial for the public. 

Guided by the enduring principles of safety, security, and trust, the voluntary commitments address the risks presented by advanced AI models and promote the adoption of specific practices – such as red-team testing and the publication of transparency reports – that will propel the whole ecosystem forward. The commitments build upon strong pre-existing work by the U.S. Government (such as the NIST AI Risk Management Framework and the Blueprint for an AI Bill of Rights) and are a natural complement to the measures that have been developed for high-risk applications in Europe and elsewhere. We look forward to their broad adoption by industry and their inclusion in the ongoing global discussions about what an effective international code of conduct might look like. 

[Chart of commitments]

Microsoft's additional commitments focus on how we will further strengthen the ecosystem and operationalize the principles of safety, security, and trust. From supporting a pilot of the National AI Research Resource to advocating for the establishment of a national registry of high-risk AI systems, we believe that these measures will help advance transparency and accountability. We have also committed to broad-scale implementation of the NIST AI Risk Management Framework, and to the adoption of cybersecurity practices that are attuned to unique AI risks. We know that this will lead to more trustworthy AI systems that benefit not only our customers, but the whole of society. 

You can view the detailed commitments Microsoft has made here. 

It takes a village to craft commitments such as these and put them into practice at Microsoft. I would like to take this opportunity to thank Kevin Scott, Microsoft's Chief Technology Officer, with whom I co-sponsor our responsible AI program, as well as Natasha Crampton, Sarah Bird, Eric Horvitz, Hanna Wallach, and Ece Kamar, who have played key leadership roles in our responsible AI ecosystem.  

As the White House's voluntary commitments reflect, people must remain at the center of our AI efforts, and I'm grateful to have strong leadership in place at Microsoft to help us deliver on our commitments and continue to grow the program we have been building for the last seven years. Establishing codes of conduct early in the development of this emerging technology will not only help ensure safety, security, and trustworthiness; it will also allow us to better unlock AI's positive impact for communities across the U.S. and around the world. 



