Monday, January 9, 2023

The Top 10 Blog Posts of 2022


Each January on the SEI Blog, we present the 10 most-visited posts of the previous year. This year's list of top 10 posts highlights our work in deepfakes, artificial intelligence, machine learning, DevSecOps, and zero trust. Posts, which were published between January 1, 2022, and December 31, 2022, are presented below in reverse order based on the number of visits.

#10 Probably Don't Rely on EPSS Yet
by Jonathan Spring

Vulnerability management involves discovering, analyzing, and handling new or reported security vulnerabilities in information systems. The services provided by vulnerability management systems are essential to both computer and network security. This blog post evaluates the pros and cons of the Exploit Prediction Scoring System (EPSS), a data-driven model designed to estimate the probability that software vulnerabilities will be exploited in practice.

The EPSS model was initiated in 2019, in parallel with our criticisms of the Common Vulnerability Scoring System (CVSS) in 2018. EPSS was developed alongside our own attempt at improving CVSS, the Stakeholder-Specific Vulnerability Categorization (SSVC); 2019 also saw version 1 of SSVC. This post focuses on EPSS version 2, released in February 2022, and on when it is and is not appropriate to use the model. This latest release has created a lot of excitement around EPSS, especially since improvements to CVSS (version 4) are still being developed. Unfortunately, the applicability of EPSS is far narrower than people might expect. This post provides my advice on how practitioners should and should not use EPSS in its current form.
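
For readers who want to see what an EPSS score looks like in practice, the minimal sketch below queries the public EPSS API hosted by FIRST. The endpoint and the "epss" and "percentile" field names reflect our reading of the published API and should be verified against the current documentation before you rely on them.

```python
# A minimal sketch of looking up EPSS scores via the public FIRST.org API.
# Field names ("epss", "percentile") reflect the API as of EPSS version 2;
# verify against the current documentation before relying on this.
import json
import urllib.request

def epss_score(cve_id: str) -> dict:
    """Fetch the EPSS probability and percentile for a single CVE."""
    url = f"https://api.first.org/data/v1/epss?cve={cve_id}"
    with urllib.request.urlopen(url) as response:
        payload = json.load(response)
    record = payload["data"][0]  # one record per requested CVE
    return {
        "cve": record["cve"],
        "probability": float(record["epss"]),       # estimated chance of exploitation
        "percentile": float(record["percentile"]),  # rank among all scored CVEs
    }

if __name__ == "__main__":
    print(epss_score("CVE-2021-44228"))  # Log4Shell, as an example
```
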
Read the post in its entirety.

#9 Containerization at the Edge
by Kevin Pitstick and Jacob Ratzlaff

Containerization is a technology that addresses many of the challenges of operating software systems at the edge. Containerization is a virtualization method where an application's software files (including code, dependencies, and configuration files) are bundled into a package and executed on a host by a container runtime engine. The package is called a container image, which becomes a container when it is executed. While similar to virtual machines (VMs), containers do not virtualize the operating system kernel (usually Linux) and instead use the host's kernel. This approach removes some of the resource overhead associated with virtualization, though it makes containers less isolated and portable than virtual machines.

While the concept of containerization has existed since Unix's chroot system was introduced in 1979, it has escalated in popularity over the past several years after Docker was released in 2013. Containers are now widely used across all areas of software and are instrumental in many projects' continuous integration/continuous delivery (CI/CD) pipelines. In this blog post, we discuss the benefits and challenges of using containerization at the edge. This discussion can help software architects analyze tradeoffs while designing software systems for the edge.
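
As a concrete illustration of the image-to-container cycle described above, here is a minimal sketch that drives the Docker CLI from Python. It assumes Docker is installed and a Dockerfile sits in the working directory; the image tag is a made-up example.

```python
# A minimal sketch of the container build-and-run cycle, driving the Docker
# CLI from Python. Assumes Docker is installed and a Dockerfile exists in
# the current directory; the image tag is a hypothetical example.
import subprocess

IMAGE = "edge-demo:latest"  # hypothetical image tag

# Bundle the application's code, dependencies, and configuration into an image.
subprocess.run(["docker", "build", "-t", IMAGE, "."], check=True)

# The image becomes a container when the runtime engine executes it on the
# host, sharing the host's kernel rather than virtualizing one.
subprocess.run(["docker", "run", "--rm", IMAGE], check=True)
```
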
Read the post in its entirety.

#8 Tactics and Patterns for Software Robustness
by Rick Kazman

Robustness has traditionally been thought of as the ability of a software-reliant system to keep working, consistent with its specifications, despite the presence of internal failures, faulty inputs, or external stresses, over a long period of time. Robustness, along with other quality attributes such as security and safety, is a key contributor to our trust that a system will perform in a reliable manner. In addition, the notion of robustness has more recently come to encompass a system's ability to withstand changes in its stimuli and environment without compromising its essential structure and characteristics. In this latter notion of robustness, systems should be malleable, not brittle, with respect to changes in their stimuli or environments. Robustness, consequently, is a highly important quality attribute to design into a system from its inception, because it is unlikely that any nontrivial system could achieve this quality without conscientious and deliberate engineering. In this blog post, which is excerpted and adapted from a recently published technical report, we will explore robustness and introduce tactics and patterns for understanding and achieving robustness.
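
To make the notion of a robustness tactic concrete, below is a minimal sketch of one widely used tactic, retry with exponential backoff, which lets a system tolerate transient faults in a dependency. This is our illustrative example, not code taken from the report.

```python
# A minimal sketch of one common robustness tactic: retrying a failing
# operation with exponentially increasing delays between attempts.
import time

def call_with_retry(operation, max_attempts: int = 3, base_delay: float = 0.5):
    """Invoke operation(); on failure, wait and retry with doubling delays."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts:
                raise  # retries exhausted; surface the fault to the caller
            time.sleep(base_delay * 2 ** (attempt - 1))

# Example usage with a hypothetical flaky operation:
# reading = call_with_retry(lambda: fetch_sensor_reading())
```
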
Read the post in its entirety.
View a podcast on this work.

#7 The Zero Trust Journey: 4 Phases of Implementation
by Timothy Morrow and Matthew Nicolai

Over the past several years, zero trust architecture has emerged as an important topic within the field of cybersecurity. Heightened federal requirements and pandemic-related challenges have accelerated the timeline for zero trust adoption within the federal sector. Private-sector organizations are also looking to adopt zero trust to bring their technical infrastructure and processes in line with cybersecurity best practices. Real-world preparation for zero trust, however, has not caught up with current cybersecurity frameworks and literature. NIST standards have defined the desired outcomes for zero trust transformation, but the implementation process is still relatively undefined. Zero trust cannot simply be implemented through off-the-shelf solutions, since it requires a comprehensive shift toward proactive security and continuous monitoring. In this post, we outline the zero trust journey, discussing four phases that organizations should address as they develop and assess their roadmap and associated artifacts against a zero trust maturity model.

Overview of the Zero Trust Journey

As the nation's first federally funded research and development center with a clear emphasis on cybersecurity, the SEI is uniquely positioned to bridge the gap between NIST standards and real-world implementation. As organizations move away from the perimeter security model, many are experiencing uncertainty in their search for a clear path toward adopting zero trust. Zero trust is an evolving set of cybersecurity paradigms that move defenses from static, network-based perimeters to focus on users, assets, and resources. The CERT Division at the Software Engineering Institute has outlined several steps that organizations can take to implement and maintain zero trust architecture, which uses zero trust principles to plan industrial and enterprise infrastructure and workflows. These steps collectively form the basis of the zero trust journey.
Read the post in its entirety.
View a podcast on this work.

#6 Two Categories of Architecture Patterns for Deployability
by Rick Kazman

Competitive pressures in many domains, as well as development paradigms such as Agile and DevSecOps, have led to the increasingly common practice of continuous delivery or continuous deployment: rapid and frequent changes and updates to software systems. In today's systems, releases can occur at any time (possibly hundreds of releases per day), and each can be instigated by a different team within an organization. Being able to release frequently means that bug fixes and security patches do not have to wait until the next scheduled release, but can be made and released as soon as a bug is discovered and fixed. It also means that new features need not be bundled into a release but can be put into production at any time. In this blog post, excerpted from the fourth edition of Software Architecture in Practice, which I coauthored with Len Bass and Paul Clements, I discuss the quality attribute of deployability and describe two associated categories of architecture patterns: patterns for structuring services and patterns for how to deploy services.

Continuous deployment is not desirable, or even possible, in all domains. If your software exists in a complex ecosystem with many dependencies, it may not be possible to release just one part of it without coordinating that release with the other parts. In addition, many embedded systems, systems residing in hard-to-access locations, and systems that are not networked would be poor candidates for a continuous deployment mindset.

This post focuses on the large and growing number of systems for which just-in-time feature releases are a significant competitive advantage, and just-in-time bug fixes are essential to safety, security, or continuous operation. Often these systems are microservice and cloud-based, although the techniques described here are not limited to those technologies.
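
As one small illustration of the deployment side of these patterns, the sketch below shows the core of canary-style traffic splitting, in which a new service version receives a small share of requests while it is observed. The version names and the 5 percent split are assumptions for illustration; the book excerpt covers the full catalog of patterns.

```python
# A minimal sketch of canary-style traffic splitting, one common pattern
# for deploying a new service version incrementally. The service names and
# the 5% split are illustrative assumptions.
import random

CANARY_FRACTION = 0.05  # send 5% of traffic to the new version

def route_request(request_id: str) -> str:
    """Pick which service version handles a request."""
    if random.random() < CANARY_FRACTION:
        return "service-v2"  # newly deployed version under observation
    return "service-v1"      # current stable version

# If the canary's error rate stays acceptable, the fraction is raised until
# v2 serves all traffic; otherwise requests are routed back to v1.
```
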
Read the post in its entirety.
View an SEI podcast on this topic.

#5 A Case Study in Applying Digital Engineering
by Nataliya Shevchenko and Peter Capell

A longstanding challenge in large software-reliant systems has been to provide system stakeholders with visibility into the status of systems as they are being developed. Such information is not always easy for senior executives and others in the engineering path to acquire when needed. In this blog post, we present a case study of an SEI project in which digital engineering is being used successfully to provide visibility of products under development, from inception in a requirement to delivery on a platform.

One of the standard conventions for communicating about the state of an acquisition program is the program management review (PMR). Due to the accumulation of detail presented in a typical PMR, it can be hard to identify the tasks that are most urgently in need of intervention. The promise of modern technology, however, is that a computer can augment human capacity to identify counterintuitive aspects of a program, effectively increasing its accuracy and quality. Digital engineering is a technology that can

  • enhance the visibility of what is most urgent and important
  • identify how changes that are introduced affect a whole system, as well as parts of it
  • enable stakeholders of a system to retrieve timely information about the status of a product moving through the development lifecycle at any point in time

Read the post in its entirety.

#4 A Hitchhiker's Guide to ML Training Infrastructure
by Jay Palat

Hardware has made a huge impact on the field of machine learning (ML). Many of the ideas we use today were published decades ago, but the cost to run them and the data necessary were too expensive, making them impractical. Recent advances, including the introduction of graphics processing units (GPUs), are making some of those ideas a reality. In this post we'll look at some of the hardware factors that influence training artificial intelligence (AI) systems, and we'll walk through an example ML workflow.

Why Is Hardware Important for Machine Learning?

Hardware is a key enabler for machine learning. Sara Hooker, in her 2020 paper "The Hardware Lottery," details the emergence of deep learning following the introduction of GPUs. Hooker's paper tells the story of the historical separation of the hardware and software communities and the costs of advancing each field in isolation: many software ideas (especially in ML) were abandoned because of hardware limitations. GPUs enable researchers to overcome many of those limitations because of their effectiveness for ML model training.
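
To ground this, the following minimal sketch shows where hardware enters a typical training step: the framework places the model and data on a GPU when one is available and falls back to the CPU otherwise. It uses PyTorch with a toy model and synthetic data; this is our illustration, not the post's own workflow.

```python
# A minimal sketch of GPU-aware model training in PyTorch. The tiny linear
# model and random data are placeholders for illustration only.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(16, 1).to(device)          # toy model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.MSELoss()

inputs = torch.randn(64, 16, device=device)        # synthetic batch
targets = torch.randn(64, 1, device=device)

optimizer.zero_grad()
loss = loss_fn(model(inputs), targets)
loss.backward()   # gradient computation is where GPUs pay off at scale
optimizer.step()
print(f"device={device}, loss={loss.item():.4f}")
```
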
Read the post in its entirety.

#3 A Technical DevSecOps Adoption Framework
by Vanessa Jackson and Lyndsi Hughes

DevSecOps practices, including continuous-integration/continuous-delivery (CI/CD) pipelines, enable organizations to respond to security and reliability events quickly and efficiently and to produce resilient and secure software on a predictable schedule and budget. Despite growing evidence and recognition of the efficacy and value of these practices, the initial implementation and ongoing improvement of the methodology can be challenging. This blog post describes our new DevSecOps adoption framework, which guides you and your organization in the planning and implementation of a roadmap to functional CI/CD pipeline capabilities. We also provide insight into the nuanced differences between an infrastructure team focused on implementing a DevSecOps paradigm and a software-development team.

A previous post presented our case for the value of CI/CD pipeline capabilities and introduced our framework at a high level, outlining how it helps set priorities during initial deployment of a development environment capable of executing CI/CD pipelines and leveraging DevSecOps practices.
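
To give a feel for the stage-by-stage structure of such a pipeline, here is a minimal sketch in Python. The stage commands are placeholders under stated assumptions; a real pipeline would be defined in your CI system's configuration, with build, test, and security-scanning tools of your choosing.

```python
# A minimal sketch of the fail-fast, stage-by-stage structure of a CI/CD
# pipeline. Stage commands are placeholder examples, not a prescribed setup.
import subprocess
import sys

STAGES = [
    ("build", ["python", "-m", "compileall", "src"]),
    ("test", ["python", "-m", "pytest", "tests"]),
    ("security-scan", ["bandit", "-r", "src"]),  # example static analyzer
]

for name, command in STAGES:
    print(f"--- stage: {name} ---")
    result = subprocess.run(command)
    if result.returncode != 0:
        sys.exit(f"pipeline failed at stage '{name}'")  # fail fast
print("pipeline succeeded")
```
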
Read the post in its entirety.

#2 What’s Explainable AI?
by Violet Turri

Consider a production line in which workers run heavy, potentially dangerous equipment to manufacture steel tubing. Company executives hire a team of machine learning (ML) practitioners to develop an artificial intelligence (AI) model that can assist the frontline workers in making safe decisions, with the hope that this model will revolutionize their business by improving worker efficiency and safety. After an expensive development process, manufacturers unveil their complex, high-accuracy model to the production line, expecting to see their investment pay off. Instead, they see extremely limited adoption by their workers. What went wrong?

This hypothetical example, adapted from a real-world case study in McKinsey's The State of AI in 2020, demonstrates the crucial role that explainability plays in the world of AI. While the model in the example may have been safe and accurate, the target users did not trust the AI system because they didn't know how it made decisions. End-users deserve to understand the underlying decision-making processes of the systems they are expected to use, especially in high-stakes situations. Perhaps unsurprisingly, McKinsey found that improving the explainability of systems led to increased technology adoption.

Explainable artificial intelligence (XAI) is a powerful tool for answering critical How? and Why? questions about AI systems and can be used to address rising ethical and legal concerns. As a result, AI researchers have identified XAI as a necessary feature of trustworthy AI, and explainability has experienced a recent surge in attention. However, despite the growing interest in XAI research and the demand for explainability across disparate domains, XAI still suffers from a number of limitations. This blog post presents an introduction to the current state of XAI, including the strengths and weaknesses of this practice.
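
To show what one simple explainability technique looks like in code, the sketch below computes permutation feature importance with scikit-learn: it measures how much a model's score drops when each feature is shuffled. It is a generic illustration on synthetic data, not a method singled out by the post.

```python
# A minimal sketch of permutation feature importance, a simple model-agnostic
# explainability technique. Synthetic data stands in for a real dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in model score.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {importance:.3f}")
```
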
Read the post in its entirety.
View an SEI podcast on this topic.

#1 How Easy Is It to Make and Detect a Deepfake?
by Catherine A. Bernaciak and Dominic Ross

A deepfake is a media file (image, video, or speech, typically representing a human subject) that has been deceptively altered using deep neural networks (DNNs) to change a person's identity. This alteration typically takes the form of a "faceswap," where the identity of a source subject is transferred onto a destination subject. The destination's facial expressions and head movements remain the same, but the appearance in the video is that of the source. A report published this year estimated that more than 85,000 harmful deepfake videos had been detected up to December 2020, with the number doubling every six months since observations began in December 2018.

Determining the authenticity of video content can be an urgent priority when a video pertains to national-security concerns. Evolutionary improvements in video-generation methods are enabling relatively low-budget adversaries to use off-the-shelf machine-learning software to generate fake content with increasing scale and realism. The House Intelligence Committee discussed at length the growing risks presented by deepfakes in a public hearing on June 13, 2019. In this blog post, we describe the technology underlying the creation and detection of deepfakes and assess current and future threat levels.

The large volume of online video presents an opportunity for the U.S. government to enhance its situational awareness on a global scale. As of February 2020, Internet users were uploading an average of 500 hours of new video content per minute on YouTube alone. However, the existence of a wide range of video-manipulation tools means that video discovered online cannot always be trusted. What's more, as the idea of deepfakes has gained visibility in popular media, the press, and social media, a parallel threat has emerged from the so-called liar's dividend: challenging the authenticity or veracity of legitimate information through a false claim that something is a deepfake even if it isn't.
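
As a small, hedged illustration of the mechanics involved, the sketch below performs the first step shared by most detection pipelines: sampling frames from a video so each can later be passed to a face detector and classifier. It uses OpenCV; the file path is a placeholder, and the classifier itself is beyond the scope of this sketch.

```python
# A minimal sketch of frame sampling, the common first step of a
# deepfake-detection pipeline. The video path is a hypothetical placeholder.
import cv2

def sample_frames(video_path: str, every_nth: int = 30):
    """Yield every Nth frame of a video as an image array."""
    capture = cv2.VideoCapture(video_path)
    index = 0
    while True:
        success, frame = capture.read()
        if not success:
            break  # end of stream
        if index % every_nth == 0:
            yield frame  # hand off to face detection / classification
        index += 1
    capture.release()

for frame in sample_frames("suspect_video.mp4"):  # hypothetical file
    pass  # run a detector on each sampled frame here
```
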
Read the post in its entirety.
View the webcast on this work.

Looking Ahead in 2023

We publish a new post on the SEI Blog every Monday morning. In the coming months, look for posts highlighting the SEI's work in artificial intelligence, digital engineering, and edge computing.


