Friday, September 15, 2023

Cadence Unveils the Neo NPU and NeuroWeave SDK, Promising Up to 20x the On-Device ML Performance



Cadence Design Systems has announced its Neo neural processing unit (NPU) technology and NeuroWeave software development kit (SDK), aiming to improve performance and efficiency for on-device machine learning and artificial intelligence (ML and AI) workloads at the edge.

"For two decades and with more than 60 billion processors shipped, industry-leading SoC [System-on-Chip] customers have relied on Cadence processor IP for their edge and on-device SoCs. Our Neo NPUs capitalize on this expertise, delivering a leap forward in AI processing and performance," claims Cadence's David Glasco.

"In today's rapidly evolving landscape," Glasco continues, "it's critical that our customers are able to design and deliver AI solutions based on their unique requirements and KPIs [Key Performance Indicators] without concern about whether future neural networks are supported. Toward this end, we've made significant investments in our new AI hardware platform and software toolchain to enable AI at every performance, power, and cost point and to drive the rapid deployment of AI-enabled systems."

The Neo NPU cores are designed for high-performance machine learning at the edge, scaling from 8 giga-operations per second (GOPS) to 80 tera-operations per second (TOPS) of compute in a single core, and from there to "hundreds of TOPS" by integrating multiple cores into a single design. The company says the design is built to efficiently offload workloads from any host processor, from application processors all the way down to microcontrollers and digital signal processors (DSPs), and offers support for FP16 floating-point as well as INT16, INT8, and INT4 integer precision.
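The lower integer precisions in that list trade a little accuracy for large area and power savings. As an illustration only (this is a generic symmetric quantization scheme, not Cadence's documented method), here is how FP weights are commonly mapped to INT8 for an NPU of this kind:

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor quantization: map the float range onto [-127, 127]."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from the INT8 codes."""
    return q.astype(np.float32) * scale

# Hypothetical weight values, purely for demonstration.
w = np.array([0.5, -1.27, 0.03, 1.0], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)  # close to w, within one quantization step
```

The same scheme extends to INT4 by shrinking the code range to [-7, 7], which is why INT4 support matters for the most power-constrained configurations.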

Drawing a direct comparison to the company's first-generation AI hardware, Cadence claims the new Neo NPUs can deliver "up to 20X greater performance," with two- to fivefold improvements in inferences per second per unit area (IPS/mm²) and five- to tenfold improvements in inferences per second per watt (IPS/W). Actual performance is configurable depending on requirements, with Cadence saying the IP can be configured from 256 up to 32K multiply-accumulate operations (MACs) per cycle to balance power, performance, and area.
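The MAC count connects directly to the headline throughput figures: counting each MAC as two operations (a multiply and an add), peak throughput is simply MACs per cycle times clock frequency. A quick sketch, with the clock frequency being an assumed value rather than a number Cadence has published:

```python
def peak_tops(macs_per_cycle, clock_ghz):
    """Peak throughput in TOPS: each MAC counts as 2 ops (multiply + accumulate)."""
    giga_ops = 2 * macs_per_cycle * clock_ghz
    return giga_ops / 1000  # GOPS -> TOPS

# A maximal 32K-MAC configuration at a hypothetical 1.25 GHz clock
# lands in the region of the quoted 80 TOPS single-core ceiling.
print(peak_tops(32 * 1024, 1.25))  # ~81.9 TOPS
```

Run the same formula at the 256-MAC low end and the single-core range spans three orders of magnitude, which is the point of making the MAC array configurable.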

On the software side, Cadence is supporting the Neo IP with a software development kit dubbed NeuroWeave. This, the company promises, provides a "uniform, scalable, and configurable software stack" across both its Tensilica and Neo core IP, with support for a range of machine learning frameworks including TensorFlow, TensorFlow Lite, TensorFlow Lite Micro, ONNX, PyTorch, Caffe2, MXNet, and JAX, as well as the Android Neural Network Compiler.

More information on the Neo NPU IP is available on the Cadence website; the company has said it is targeting general availability in December this year, with unnamed "lead customers" having already begun "early engagements."


