Tuesday, September 17, 2024

Counter AI, Coordinated Vulnerability Disclosure, and Artificial Intelligence Engineering


As part of an ongoing effort to keep you informed about our latest work, this blog post summarizes some recent publications from the SEI in the areas of counter artificial intelligence (AI), coordinated vulnerability disclosure for machine learning (ML) and AI, secure development, cybersecurity, and artificial intelligence engineering.

These publications highlight the latest work from SEI technologists in these areas. This post includes a listing of each publication, its authors, and links where they can be accessed on the SEI website.

Counter AI: What Is It and What Can You Do About It?
By Nathan M. VanHoudnos, Carol J. Smith, Matt Churilla, Shing-hon Lau, Lauren McIlvenny, and Greg Touhill

As the strategic importance of AI increases, so too does the importance of defending those AI systems. To understand AI defense, it is necessary to understand AI offense, that is, counter AI. This paper describes counter AI. First, we describe the technologies that compose AI systems (the AI Stack) and how those systems are built in a machine learning operations (MLOps) lifecycle. Second, we describe three types of counter-AI attacks across the AI Stack and five threat models detailing when these attacks occur within the MLOps lifecycle.

Finally, based on Software Engineering Institute research and practice in counter AI, we make two recommendations. In the long term, the field should invest in AI engineering research that fosters processes, procedures, and mechanisms that reduce the vulnerabilities and weaknesses being introduced into AI systems. In the near term, the field should develop the processes necessary to efficiently respond to and mitigate counter-AI attacks, such as building an AI Security Incident Response Team and extending existing cybersecurity processes like the Computer Security Incident Response Team Services Framework.
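The attack taxonomy is easier to picture with one concrete instance of an attack class. The sketch below is a hypothetical illustration, not taken from the paper: it shows an evasion-style attack on a toy logistic-regression classifier using the well-known fast gradient sign method, where the attacker nudges an input in the direction that increases the model's loss.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, epsilon):
    """Fast-gradient-sign evasion: move x in the direction that
    increases the logistic loss for the true label y_true."""
    p = sigmoid(w @ x + b)        # model's probability of class 1
    grad_x = (p - y_true) * w     # d(loss)/dx for logistic loss
    return x + epsilon * np.sign(grad_x)

# Toy model: weights chosen so x = [2, 2] is confidently class 1.
w = np.array([1.0, 1.0])
b = -1.0
x = np.array([2.0, 2.0])

before = sigmoid(w @ x + b)                          # ~0.95: class 1
x_adv = fgsm_perturb(x, w, b, y_true=1.0, epsilon=2.0)
after = sigmoid(w @ x_adv + b)                       # pushed toward class 0

print(before > 0.5, after < 0.5)  # True True
```

Defenses against this class of attack (adversarial training, input sanitization) are exactly the kind of engineering practice the paper's long-term recommendation targets.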
Read the SEI white paper.

Lessons Learned in Coordinated Disclosure for Artificial Intelligence and Machine Learning Systems
by Allen D. Householder, Vijay S. Sarvepalli, Jeff Havrilla, Matt Churilla, Lena Pons, Shing-hon Lau, Nathan M. VanHoudnos, Andrew Kompanek, and Lauren McIlvenny

In this paper, SEI researchers incorporate several lessons learned from the coordination of artificial intelligence (AI) and machine learning (ML) vulnerabilities at the SEI's CERT Coordination Center (CERT/CC). They also include their observations of public discussions of AI vulnerability coordination cases.

Risk management within the context of AI systems is a rapidly evolving and substantial space. Even when limited to cybersecurity risk management, AI systems require comprehensive security, such as what the National Institute of Standards and Technology (NIST) describes in The NIST Cybersecurity Framework (CSF).

In this paper, the authors focus on one part of cybersecurity risk management for AI systems: the CERT/CC's lessons learned from applying the Coordinated Vulnerability Disclosure (CVD) process to reported "vulnerabilities" in AI and ML systems.
Read the SEI white paper.

On the Design, Development, and Testing of Modern APIs
by Alejandro Gomez and Alex Vesey

Application programming interfaces (APIs) are a fundamental component of modern software applications; thus, nearly all software engineers are designers or users of APIs. From assembly instruction labels that provide reusable code to the powerful web-based APIs of today, APIs enable powerful abstractions by making a system's operations available to users while hiding the details of how the APIs are implemented, thereby improving flexibility of implementation and facilitating updates.

APIs provide access to complicated functionality within large codebases worked on by dozens if not hundreds of people, who often rotate in and out of projects while simultaneously dealing with changing requirements in an increasingly adversarial environment. Under these circumstances, an API must continue to behave as expected; otherwise, calling applications inherit the unintended behavior the API system provides. As systems grow in complexity and size, the need for clear, concise, and usable APIs will remain.
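The abstraction argument can be made concrete with a small example (hypothetical, not from the paper): the class below exposes one stable operation as its public API while keeping bookkeeping details private, so the implementation can change without breaking callers.

```python
class RateLimiter:
    """Public API: allow() is the contract; everything else is an
    implementation detail that callers must not depend on."""

    def __init__(self, max_calls: int):
        self._max_calls = max_calls  # private: could become a token bucket
        self._used = 0               # private: could move to shared storage

    def allow(self) -> bool:
        """Return True if the caller may proceed, False otherwise."""
        if self._used < self._max_calls:
            self._used += 1
            return True
        return False

limiter = RateLimiter(max_calls=2)
results = [limiter.allow() for _ in range(3)]
print(results)  # [True, True, False]
```

As long as `allow()` keeps its contract, the internals can be rewritten (for example, to a sliding-window counter) without any change to calling code, which is the flexibility the paragraph above describes.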

In this context, this white paper addresses the following questions concerning APIs:

  • What is an API?
  • What factors drive API design?
  • What qualities do good APIs exhibit?
  • What specific socio-technical aspects of DevSecOps apply to the development, security, and operational support of APIs?
  • How are APIs tested, from the systems and software security patterns standpoint?
  • What cybersecurity and other best practices apply to APIs?

Read the white paper.

Embracing AI: Unlocking Scalability and Transformation Through Generative Text, Imagery, and Synthetic Audio
by Tyler Brooks, Shannon Gallagher, and Dominic A. Ross

The potential of generative artificial intelligence (AI) extends well beyond automation of existing processes, making "digital transformation" a possibility for a rapidly growing set of applications. In this webcast, Tyler Brooks, Shannon Gallagher, and Dominic Ross aim to demystify AI and illustrate its transformative power in achieving scalability, adapting to changing landscapes, and driving digital innovation. The speakers explore practical applications of generative text, imagery, and synthetic audio, with an emphasis on showcasing how these technologies can revolutionize many kinds of workflows.

What attendees will learn:

  • Practical applications of generative text, imagery, and synthetic audio
  • Impact on the scalability of educational content delivery
  • How synthetic audio is transforming AI education

View the webcast.

Evaluating Large Language Models for Cybersecurity Tasks: Challenges and Best Practices
by Jeff Gennari and Samuel J. Perl

How can we effectively use large language models (LLMs) for cybersecurity tasks? In this podcast, Jeff Gennari and Sam Perl discuss applications for LLMs in cybersecurity, potential challenges, and recommendations for evaluating LLMs.
Listen to/view the podcast.

Using Quality Attribute Scenarios for ML Model Test Case Generation
by Rachel Brower-Sinning, Grace Lewis, Sebastián Echeverría, and Ipek Ozkaya

Testing of machine learning (ML) models is a growing challenge for researchers and practitioners alike. Unfortunately, current practice for testing ML models prioritizes testing for model function and performance while often neglecting the requirements and constraints of the ML-enabled system that integrates the model. This limited view of testing can lead to failures during integration, deployment, and operations, contributing to the difficulties of moving models from development to production. This paper presents an approach based on quality attribute (QA) scenarios to elicit and define system- and model-relevant test cases for ML models. The QA-based approach described in this paper has been integrated into MLTE, a process and tool to support ML model test and evaluation. Feedback from users of MLTE highlights its effectiveness in testing beyond model performance and identifying failures early in the development process.
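A quality attribute scenario pairs a stimulus with a measurable response, which maps naturally onto an executable test. The sketch below is a hypothetical illustration in plain Python (MLTE's actual API is not shown here): it places a system-level latency scenario, namely that a single inference must finish within a time budget, alongside the conventional accuracy check.

```python
import time

def predict(x):
    """Stand-in for a trained model's inference call."""
    return 1 if sum(x) > 0 else 0

def accuracy_ok(samples, min_accuracy=0.9):
    """Conventional model-function test: predictive performance."""
    correct = sum(predict(x) == y for x, y in samples)
    return correct / len(samples) >= min_accuracy

def latency_scenario_ok(x, budget_seconds=0.1):
    """QA scenario: stimulus = one inference request under normal
    operation; response measure = completion within the budget."""
    start = time.perf_counter()
    predict(x)
    return (time.perf_counter() - start) <= budget_seconds

samples = [([1.0, 2.0], 1), ([-3.0, 1.0], 0)]
ok_accuracy = accuracy_ok(samples)
ok_latency = latency_scenario_ok([1.0, 2.0])
print(ok_accuracy, ok_latency)  # True True
```

The point of the QA-based approach is that checks like `latency_scenario_ok` come from the requirements of the system that hosts the model, not from the model's metrics alone.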
Read the conference paper.

