

Processor Core Dependent Semiconductor IPs

In the realm of semiconductor IP, the Processor Core Dependent category encompasses a variety of intellectual properties specifically designed to enhance and support processor cores. These IPs are tailored to work in harmony with core processors to optimize their performance, adding value by reducing time-to-market and improving efficiency in modern integrated circuits. This category is crucial for the customization and adaptation of processors to meet specific application needs, addressing both performance optimization and system complexity management.

Processor Core Dependent IPs are integral components, typically found in applications that require robust data processing capabilities such as smartphones, tablets, and high-performance computing systems. They can also be implemented in embedded systems for automotive, industrial, and IoT applications, where precision and reliability are paramount. By providing foundational building blocks that are pre-verified and configurable, these semiconductor IPs significantly simplify the integration process within larger digital systems, enabling a seamless enhancement of processor capabilities.

Products in this category may include cache controllers, memory management units, security hardware, and specialized processing units, all designed to complement and extend the functionality of processor cores. These solutions enable system architects to leverage existing processor designs while incorporating cutting-edge features and optimizations tailored to specific application demands. Such customizations can significantly boost the performance, energy efficiency, and functionality of end-user devices, translating into better user experiences and competitive advantages.

In essence, Processor Core Dependent semiconductor IPs represent a strategic approach to processor design, providing a toolkit for customization and optimization. By focusing on interdependencies within processing units, these IPs allow for the creation of specialized solutions that cater to the needs of various industries, ensuring the delivery of high-performance, reliable, and efficient computing solutions. As the demand for sophisticated digital systems continues to grow, the importance of these IPs in maintaining competitive edge cannot be overstated.

All semiconductor IP

Metis AIPU PCIe AI Accelerator Card

The Metis AIPU PCIe AI Accelerator Card is engineered for developers demanding superior AI performance. With its quad-core Metis AIPU, this card delivers up to 214 TOPS, tackling challenging vision applications with unmatched efficiency. The PCIe card is designed with user-friendly integration in mind, featuring the Voyager SDK software stack that accelerates application deployment. Offering impressive processing speeds, the card supports up to 3,200 FPS for ResNet-50 models, providing a competitive edge for demanding AI tasks. Its design ensures it meets the needs of a wide array of AI applications, allowing for scalability and adaptability in various use cases.
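As a rough sanity check, the quoted ResNet-50 throughput can be compared against the card's peak compute. The per-inference operation count below is an assumed ballpark figure for ResNet-50 at 224x224, not an Axelera specification:

```python
# Back-of-envelope utilization estimate (illustrative; the per-inference
# operation count is an assumption, not vendor data).
GOPS_PER_INFERENCE = 8.0   # ResNet-50 @ 224x224: ~4 GMACs ~= 8 GOPs (assumed)
FPS = 3200                 # quoted ResNet-50 throughput
PEAK_TOPS = 214            # quoted quad-core Metis AIPU peak

sustained_tops = FPS * GOPS_PER_INFERENCE / 1000  # TOPS actually consumed
utilization = sustained_tops / PEAK_TOPS

print(f"sustained: {sustained_tops:.1f} TOPS, utilization: {utilization:.1%}")
```

Under these assumptions the quoted frame rate corresponds to roughly 25 TOPS sustained, a plausible fraction of the 214 TOPS peak once memory and scheduling overheads are accounted for.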

Axelera AI
2D / 3D, AI Processor, AMBA AHB / APB / AXI, Building Blocks, CPU, Ethernet, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor, WMV

Veyron V2 CPU

The Veyron V2 CPU represents Ventana's second-generation RISC-V high-performance processor, designed for cloud, data center, edge, and automotive applications. This processor offers outstanding compute capabilities with its server-class architecture, optimized for handling complex, virtualized, and cloud-native workloads efficiently. The Veyron V2 is available both as IP for custom SoCs and as a complete silicon platform, ensuring flexibility for integration into various technological infrastructures.

Emphasizing a modern architectural design, it is fully compliant with the RISC-V RVA23 specification and delivers high instructions per clock (IPC) from a power-efficient architecture. Comprising multiple core clusters, this CPU is capable of delivering superior AI and machine learning performance, significantly boosting throughput and energy efficiency. The Veyron V2's advanced fabric interconnects and extensive cache architecture provide the necessary infrastructure for high-performance applications, ensuring broad market adoption and versatile deployment options.

Ventana Micro Systems
AI Processor, Audio Processor, CPU, DSP Core, Processor Core Dependent, Processor Core Independent, Processor Cores

Chimera GPNPU

The Quadric Chimera General Purpose Neural Processing Unit (GPNPU) delivers unparalleled performance for AI workloads, handling diverse and complex tasks without requiring separate processors for different operations. Designed to unify AI inference and traditional computing, the GPNPU supports matrix, vector, and scalar tasks within a single, cohesive execution pipeline. This design not only simplifies the integration of AI capabilities into system-on-chip (SoC) architectures but also significantly boosts developer productivity by allowing developers to focus on optimizing code rather than partitioning it.

The Chimera GPNPU is highly scalable, supporting a wide range of operations across all market segments, including automotive applications with its ASIL-ready versions. With a performance range from 1 to 864 TOPS, it excels at running the latest AI models, such as vision transformers and large language models, alongside classic network backbones. This flexibility ensures that devices powered by the Chimera GPNPU can adapt to advancing AI trends, making them suitable for applications that require both immediate performance and long-term capability.

A key feature of the Chimera GPNPU is its fully programmable nature, making it a future-proof solution for deploying cutting-edge AI models. Unlike traditional NPUs that rely on hardwired operations, the Chimera GPNPU takes a software-driven approach and is delivered as source RTL, making it a versatile option for inference in mobile, automotive, and edge computing applications. This programmability allows easy updating and adaptation to new AI model operators, maximizing the lifespan and relevance of chips that adopt the technology.

Quadric
15 Categories

xcore.ai

The xcore.ai platform by XMOS is a versatile, high-performance microcontroller designed for the integration of AI, DSP, and real-time I/O processing. Focusing on bringing intelligence to the edge, this platform facilitates the construction of entire DSP systems in software without the need for multiple discrete chips. Its architecture is optimized for low-latency operation, making it suitable for diverse applications from consumer electronics to industrial automation.

This platform offers a robust set of features conducive to sophisticated computational tasks, including support for AI workloads and enhanced control logic. The xcore.ai platform streamlines development processes by providing a cohesive environment that blends DSP capabilities with AI processing, enabling developers to realize complex applications with greater efficiency. By doing so, it reduces the complexity typically associated with chip integration in advanced systems.

Designed for flexibility, xcore.ai supports a wide array of applications across various markets. Its ability to handle audio, voice, and general-purpose processing makes it an essential building block for smart consumer devices, industrial control systems, and AI-powered solutions. Coupled with comprehensive software support and development tools, the xcore.ai ensures a seamless integration path for developers aiming to push the boundaries of AI-enabled technologies.
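To illustrate the "entire DSP system in software" idea, here is a minimal direct-form FIR filter sketch in plain Python. On xcore.ai this logic would be written as compiled code against the platform's toolchain; the coefficients below are arbitrary example values:

```python
def fir(samples, coeffs):
    """Direct-form FIR filter: y[n] = sum_k h[k] * x[n-k]."""
    taps = [0.0] * len(coeffs)   # delay line, most recent sample first
    out = []
    for x in samples:
        taps = [x] + taps[:-1]   # shift new sample into the delay line
        out.append(sum(h * t for h, t in zip(coeffs, taps)))
    return out

# 4-tap moving average (arbitrary example coefficients)
y = fir([1.0, 1.0, 1.0, 1.0, 1.0], [0.25, 0.25, 0.25, 0.25])
print(y)  # ramps up to 1.0 once the delay line fills
```

The same structure (a per-sample loop over a delay line) is what a software-defined DSP pipeline executes per channel, which is why deterministic, low-latency scheduling matters on such platforms.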

XMOS Semiconductor
21 Categories

Metis AIPU M.2 Accelerator Module

The Metis AIPU M.2 Accelerator Module is designed for devices that require high-performance AI inference in a compact form factor. Powered by a quad-core Metis AI Processing Unit (AIPU), this module optimizes power consumption and integration, making it ideal for AI-driven applications. With a dedicated memory of 1 GB DRAM, it enhances the capabilities of vision processing systems, providing significant boosts in performance for devices with Next Generation Form Factor (NGFF) M.2 sockets. Ideal for use in computer vision systems and more, it offers hassle-free integration and evaluation with Axelera's Voyager SDK. This accelerator module is tailored for any application seeking to harness the power of AI processing efficiently. The Metis AIPU M.2 Module streamlines the deployment of AI applications, ensuring high performance with reduced power consumption.

Axelera AI
2D / 3D, AI Processor, AMBA AHB / APB / AXI, Building Blocks, CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor, WMV

aiWare

The aiWare NPU (Neural Processing Unit) by aiMotive is a high-performance hardware solution tailored specifically for automotive AI applications. It is engineered to accelerate inference tasks for autonomous driving systems, ensuring excellent performance across a variety of neural network workloads. aiWare delivers significant flexibility and efficiency, capable of scaling from basic Level 2 applications to complex multi-sensor Level 3+ systems.

Achieving up to 98% efficiency, aiWare's design focuses on minimizing power utilization while maximizing core performance. It supports a broad spectrum of neural network architectures, including convolutional neural networks, transformers, and recurrent networks, making it suitable for diverse AI tasks in the automotive sphere. The NPU's architecture allows for minimal external memory access, thanks to its highly efficient dataflow design that capitalizes on on-chip memory caching.

With a robust toolkit known as aiWare Studio, engineers can efficiently optimize neural networks without in-depth knowledge of low-level programming, streamlining development and integration efforts. The aiWare hardware is also compatible with V2X communication and advanced driver assistance systems, adapting readily to varied operational needs. Its comprehensive support for automotive safety standards further cements its reputation as a reliable choice for integrating artificial intelligence into next-generation vehicles.
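The efficiency figure matters mainly for sizing: an NPU that sustains 98% of its peak needs far less silicon than one that sustains 40% for the same workload. A small sketch, where the sustained-workload number is an assumption for illustration, not aiMotive data:

```python
# How hardware efficiency (sustained/peak utilization) affects NPU sizing.
# The 50-TOPS workload figure is an assumed example value.
def required_peak_tops(workload_tops, efficiency):
    """Peak TOPS needed to sustain a given workload at a given efficiency."""
    return workload_tops / efficiency

target = 50.0  # assumed sustained multi-sensor workload, TOPS
print(f"at 98% efficiency: {required_peak_tops(target, 0.98):.1f} peak TOPS")
print(f"at 40% efficiency: {required_peak_tops(target, 0.40):.1f} peak TOPS")
```

The gap between the two results is the over-provisioning (and power/area cost) that a low-efficiency design must pay for the same sustained throughput.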

aiMotive
11 Categories

SAKURA-II AI Accelerator

The SAKURA-II is a cutting-edge AI accelerator that combines high performance with low power consumption, designed to efficiently handle multi-billion-parameter models for generative AI applications. It is particularly suited for tasks that demand real-time AI inferencing with minimal batch processing, making it ideal for deployment in edge environments. With a typical power usage of 8 watts and a compact footprint, the SAKURA-II achieves more than twice the AI compute efficiency of comparable solutions.

This AI accelerator supports next-generation applications by providing up to 4x more DRAM bandwidth than alternatives, crucial for processing complex vision tasks and large language models (LLMs). The hardware offers advanced precision through software-enabled mixed precision, which achieves near-FP32 accuracy, while a unique sparse computing feature optimizes memory usage. Its robust memory architecture supports up to 32 GB of DRAM, providing ample capacity for intensive AI workloads.

The SAKURA-II's modular design allows it to be used in multiple form factors, addressing the diverse needs of modern computing tasks such as those found in smart cities, autonomous robotics, and smart manufacturing. Its adaptability is further enhanced by runtime-configurable data paths, allowing the device to optimize task scheduling and resource allocation dynamically. These features are powered by the Dynamic Neural Accelerator engine, ensuring efficient computation and energy management.
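A minimal sketch of the mixed-precision idea: weights stored as int8 with a floating-point scale, with accumulation in float. This is the generic technique, not EdgeCortix's exact scheme, and the weight values are arbitrary:

```python
# Generic mixed-precision sketch: int8 weights + float scale, float accumulate.
def quantize(ws):
    """Per-tensor symmetric int8 quantization."""
    scale = max(abs(w) for w in ws) / 127.0
    q = [round(w / scale) for w in ws]
    return q, scale

def dot_fp32(ws, xs):
    return sum(w * x for w, x in zip(ws, xs))

def dot_mixed(q, scale, xs):
    # Integer weights, float activations, float accumulation.
    return scale * sum(qi * x for qi, x in zip(q, xs))

weights = [0.12, -0.50, 0.33, 0.90]   # arbitrary example weights
acts    = [1.0, 2.0, -1.5, 0.25]      # arbitrary example activations
q, s = quantize(weights)
exact  = dot_fp32(weights, acts)
approx = dot_mixed(q, s, acts)
print(exact, approx)  # small relative error versus the FP32 result
```

Even this crude per-tensor scheme lands within about 1% of the FP32 result here, which is why carefully tuned mixed precision can approach FP32 accuracy at a fraction of the memory cost.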

EdgeCortix Inc.
AI Processor, CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Vision Processor

A25

The A25 processor model is a versatile CPU suitable for a variety of embedded applications. With its 5-stage pipeline and 32/64-bit architecture, it delivers high performance even with a low gate count, which translates to efficiency in power-sensitive environments. The A25 is equipped with Andes Custom Extensions that enable tailored instruction sets for specific application accelerations. Supporting robust high-frequency operations, this model shines in its ability to manage data prefetching and cache coherence in multicore setups, making it adept at handling complex processing tasks within constrained spaces.

Andes Technology
CPU, IoT Processor, Microcontroller, Processor Core Dependent, Processor Cores, Standard cell

GenAI v1

RaiderChip's GenAI v1 is a pioneering hardware-based generative AI accelerator, designed to perform local inference at the edge. This technology integrates optimally with on-premises servers and embedded devices, offering substantial benefits in privacy, performance, and energy efficiency over traditional hybrid AI solutions. The GenAI v1 NPU streamlines the execution of large language models by running them directly on the hardware, eliminating the need for external components such as host CPUs or internet connections. With its ability to support complex models such as Llama 3.2 with 4-bit quantization on LPDDR4 memory, the GenAI v1 achieves high efficiency in AI token processing, coupled with energy savings and reduced latency.

What sets GenAI v1 apart is its scalability and cost-effectiveness, significantly outperforming competitive solutions such as Intel Gaudi 2, Nvidia's cloud GPUs, and Google's cloud TPUs in terms of memory efficiency. This solution maximizes the number of tokens generated per unit of memory bandwidth, addressing one of the primary limitations in generative AI workflows. Furthermore, the adept memory usage of GenAI v1 reduces the dependency on costly memory types like HBM, opening the door to more affordable alternatives without diminishing processing capabilities.

With a target-agnostic approach, RaiderChip ensures the GenAI v1 can be adapted to various FPGAs and ASICs, offering configuration flexibility that allows users to balance performance with hardware costs. Its compatibility with a wide range of transformer-based models, including proprietary modifications, ensures GenAI v1's robust placement across sectors requiring high-speed processing, such as finance, medical diagnostics, and autonomous systems.

RaiderChip's innovation with GenAI v1 focuses on supporting both vanilla and quantized AI models, ensuring the high computation speeds needed for real-time applications without compromising accuracy. This capability underpins the company's strategic vision of enabling versatile and sustainable AI solutions across industries. By prioritizing integration ease and operational independence, RaiderChip provides a tangible edge in applying generative AI effectively and widely.
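The tokens-per-bandwidth point can be made concrete with a back-of-envelope bound: in single-stream decoding, every generated token must stream roughly the full weight set from memory once, so bandwidth divided by model size caps the token rate. The model size and bandwidth figures below are assumptions for illustration, not RaiderChip data:

```python
# Memory-bandwidth bound on LLM token generation (illustrative numbers).
def max_tokens_per_s(model_bytes, bw_bytes_per_s):
    """Each decoded token streams roughly the full weight set once."""
    return bw_bytes_per_s / model_bytes

model_bytes = 3e9 * 0.5   # ~3B-parameter model at 4 bits/weight ~= 1.5 GB
bandwidth   = 25e9        # assumed ~25 GB/s LPDDR4 bandwidth
rate = max_tokens_per_s(model_bytes, bandwidth)
print(f"~{rate:.1f} tokens/s upper bound")
```

Halving the bytes per weight doubles this bound at fixed bandwidth, which is why quantization and tokens-per-bandwidth efficiency, rather than raw compute, dominate edge LLM performance.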

RaiderChip
GLOBALFOUNDRIES, TSMC
28nm, 65nm
AI Processor, AMBA AHB / APB / AXI, Audio Controller, Coprocessor, CPU, Ethernet, Microcontroller, Multiprocessor / DSP, PowerPC, Processor Core Dependent, Processor Cores

GenAI v1-Q

The GenAI v1-Q from RaiderChip brings a specialized focus on quantized AI operations, reducing memory requirements significantly while maintaining impressive precision and speed. This innovative accelerator is engineered to execute large language models in real time, utilizing advanced quantization techniques such as Q4_K and Q5_K, thereby enhancing AI inference efficiency especially in memory-constrained environments. By offering a 276% boost in processing speed alongside a 75% reduction in memory footprint, GenAI v1-Q empowers developers to integrate advanced AI capabilities into smaller, less powerful devices without sacrificing operational quality. This makes it particularly advantageous for applications demanding swift response times and low latency, including real-time translation, autonomous navigation, and responsive customer interactions.

The GenAI v1-Q diverges from conventional AI solutions by functioning independently, without reliance on external networks or cloud services. Its design harmonizes superior computational performance with scalability, allowing seamless adaptation across varied hardware platforms, including FPGA and ASIC implementations. This flexibility is crucial for tailoring performance parameters such as model scale, inference velocity, and power consumption to meet exacting user specifications.

RaiderChip's GenAI v1-Q addresses crucial AI industry needs with its ability to manage multiple transformer-based models and confidential data securely on-premises. This opens doors for its application in sensitive areas such as defense, healthcare, and financial services, where confidentiality and rapid processing are paramount. With GenAI v1-Q, RaiderChip underscores its commitment to advancing AI solutions that are both environmentally sustainable and economically viable.
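The quoted ~75% memory reduction is consistent with 4-bit-class quantization. A rough check, assuming an 8-billion-parameter model and a llama.cpp-style Q4_K average of about 4.5 bits per weight (both assumptions, not RaiderChip figures):

```python
# Footprint check for 4-bit-class quantization vs an FP16 baseline.
BITS_FP16 = 16
BITS_Q4K  = 4.5   # approximate Q4_K average bits/weight (scales included)

def footprint_gb(params, bits_per_weight):
    return params * bits_per_weight / 8 / 1e9

params = 8e9  # assumed 8B-parameter model
fp16 = footprint_gb(params, BITS_FP16)  # 16.0 GB
q4k  = footprint_gb(params, BITS_Q4K)   # 4.5 GB
print(f"reduction: {1 - q4k / fp16:.0%}")  # ~72%, close to the quoted 75%
```

The residual gap from a flat 4x comes from the per-block scales and higher-precision layers that k-quant schemes retain for accuracy.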

RaiderChip
TSMC
65nm
AI Processor, AMBA AHB / APB / AXI, Audio Controller, Coprocessor, CPU, Ethernet, Microcontroller, Multiprocessor / DSP, PowerPC, Processor Core Dependent, Processor Cores

Hanguang 800 AI Accelerator

The Hanguang 800 AI Accelerator is a high-performance AI processor developed to meet the complex demands of artificial intelligence workloads. This accelerator is engineered with cutting-edge AI processing capabilities, enabling rapid data analysis and machine learning model inference. Designed for flexibility, the Hanguang 800 delivers superior computation speed and energy efficiency, making it an optimal choice for AI applications in a variety of sectors, from data centers to edge computing. By supporting high-volume data throughput, it enables organizations to achieve significant advantages in speed and efficiency, facilitating the deployment of intelligent solutions.

T-Head Semiconductor
AI Processor, CPU, IoT Processor, Processor Core Dependent, Security Processor, Vision Processor

RISC-V Core-hub Generators

The RISC-V Core-hub Generators from InCore are tailored for developers who need advanced control over their core architectures. This innovative tool enables users to configure core-hubs precisely at the instruction set and microarchitecture levels, allowing for optimized design and functionality. The platform supports diverse industry applications by facilitating the seamless creation of scalable and customizable RISC-V cores.

With the RISC-V Core-hub Generators, InCore empowers users to craft their own processor solutions from the ground up. This flexibility is pivotal for businesses looking to capitalize on the burgeoning RISC-V ecosystem, providing a pathway to innovation with reduced risk and cost. Incorporating feedback from leading industry partners, these generators are designed to lower verification costs while accelerating time-to-market for new designs.

Users benefit from InCore's robust support infrastructure and a commitment to simplifying complex chip design processes. This product is particularly beneficial for organizations aiming to integrate RISC-V technology efficiently into their existing systems, ensuring compatibility and enhancing functionality through intelligent automation and state-of-the-art tools.

InCore Semiconductors
AI Processor, CPU, IoT Processor, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores

NuLink Die-to-Die PHY for Standard Packaging

Eliyan's NuLink Die-to-Die (D2D) PHY products are designed to provide high-performance, low-power connectivity between chips, or 'chiplets,' in a system. Using standard organic laminate packaging, these IP cores maintain power and performance levels that would traditionally require advanced packaging techniques like silicon interposers. This eliminates the need for such technology, allowing cost-effective system design and reducing thermal, test, and production challenges while maintaining performance. Eliyan’s approach enables flexibility, allowing a broad substrate area that supports more chiplets in the package, significantly boosting performance and power metrics. These D2D PHY cores accommodate various industry standards, including UCIe and BoW, providing configurations tailor-made for optimal bump map layout, thus enhancing overall system efficiency.
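The packaging trade-off can be sketched as a "beachfront" budget: aggregate die-to-die bandwidth scales with die-edge length, wire density, and per-wire signaling rate. Standard organic laminate supports fewer wires per millimeter than a silicon interposer, so a higher per-wire rate can recover the same bandwidth. All numbers below are hypothetical illustrations, not Eliyan specifications:

```python
# Die-to-die "beachfront" bandwidth budgeting (hypothetical numbers).
def edge_bandwidth_gbps(edge_mm, wires_per_mm, gbps_per_wire):
    """Aggregate D2D bandwidth along one die edge."""
    return edge_mm * wires_per_mm * gbps_per_wire

# Fewer wires/mm on organic laminate, offset by faster signaling per wire.
laminate   = edge_bandwidth_gbps(5, 50, 32)   # coarse bumps, fast SerDes-like PHY
interposer = edge_bandwidth_gbps(5, 200, 8)   # dense microbumps, slower per wire
print(laminate, interposer)  # same aggregate bandwidth in this example
```

This is the core argument for laminate-based D2D PHYs: matching interposer-class bandwidth on cheaper packaging by trading wire count for per-wire rate.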

Eliyan
Intel Foundry, TSMC
7nm
AMBA AHB / APB / AXI, CXL, D2D, MIPI, Network on Chip, Processor Core Dependent, V-by-One

AndesCore Processors

AndesCore Processors offer a robust lineup of high-performance CPUs tailored for diverse market segments. Employing the AndeStar V5 instruction set architecture, all of these cores implement RISC-V technology. The processor family is classified into different series, including the Compact, 25-Series, 27-Series, 40-Series, and 60-Series, each featuring unique architectural advances. For instance, the Compact Series specializes in delivering compact, power-efficient processing, while the 60-Series is optimized for high-performance out-of-order execution. Additionally, AndesCore processors extend customization through Andes Custom Extension, which allows users to define specific instructions to accelerate application-specific tasks, offering a significant edge in design flexibility and processing efficiency.

Andes Technology
CPU, FlexRay, Processor Core Dependent, Processor Core Independent, Processor Cores, Security Processor

NMP-750

The NMP-750 is AiM Future's powerful edge computing accelerator designed for high-performance tasks. With up to 16 TOPS of computational throughput, it is well suited to automotive systems, autonomous mobile robots (AMRs), UAVs, and AR/VR applications. Fitted with up to 16 MB of local memory and RISC-V or Arm Cortex-R/A 32-bit CPUs, it supports the diverse data processing requirements of modern technological solutions.

The versatility of the NMP-750 shows in its ability to manage complex processes such as multi-camera stream processing and spectral efficiency management. It is also an apt choice for applications that require energy management and building automation, demonstrating exceptional potential in smart city and industrial setups.

With its robust architecture, the NMP-750 ensures seamless integration into systems that need to handle large data volumes and support high-speed data transmission. This makes it ideal for applications in telecommunications and security where infrastructure resilience is paramount.
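A quick budget illustrates multi-camera sizing on a 16-TOPS device; the camera count and frame rate below are assumed example values, not AiM Future figures:

```python
# Per-frame compute budget for multi-camera processing (assumed workload).
PEAK_TOPS = 16         # quoted NMP-750 throughput
cameras, fps = 4, 30   # assumed sensor setup

# TOPS -> GOPs available per frame, split evenly across camera streams.
ops_per_frame_gops = PEAK_TOPS * 1000 / (cameras * fps)
print(f"~{ops_per_frame_gops:.0f} GOPs available per frame per camera")
```

Comparing that per-frame budget against a candidate network's operation count is the first-order check for whether a given camera configuration fits the accelerator.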

AiM Future
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent

Azurite Core-hub

The Azurite Core-hub by InCore Semiconductors is a sophisticated solution designed to offer scalable RISC-V SoCs with high-speed secure interconnect capabilities. This processor is tailored for performance-demanding applications, ensuring that systems maintain robust security while executing tasks at high speeds.

Azurite leverages advanced interconnect technologies to enhance the communication between components within a SoC, making it ideal for industries that require rapid data transfer and high processing capabilities. The core is engineered to be scalable, supporting a wide range of applications from edge AI to functional safety systems, adapting seamlessly to various industry needs.

Engineered with a focus on security, the Azurite Core-hub incorporates features that protect data integrity and system operation in a dynamic technological landscape. This makes it a reliable choice for companies seeking to integrate advanced RISC-V architectures into their security-focused applications, offering not just innovation but also peace of mind with its secure design.

InCore Semiconductors
AI Processor, CPU, Microcontroller, Processor Core Dependent, Processor Core Independent, Processor Cores

RISC-V CPU IP N Class

The N Class RISC-V CPU IP from Nuclei is tailored for applications where space efficiency and power conservation are paramount. It features a 32-bit architecture and is highly suited for microcontroller applications within the AIoT realm. The N Class processors are crafted to provide robust processing capabilities while maintaining a minimal footprint, making them ideal candidates for devices that require efficient power management and secure operations. By adhering to the open RISC-V standard, Nuclei ensures that these processors can be seamlessly integrated into various solutions, offering customizable options to fit specific system requirements.

Nuclei System Technology
Building Blocks, CPU, IoT Processor, Microcontroller, Processor Core Dependent, Processor Cores

RAIV General Purpose GPU

RAIV represents Siliconarts' General Purpose GPU (GPGPU) offering, engineered to accelerate data processing across diverse industries. This versatile GPU IP is essential in sectors engaged in high-performance computing tasks, such as autonomous driving, IoT, and sophisticated data centers. With RAIV, Siliconarts taps into the potential of the fourth industrial revolution, enabling rapid computation and seamless data management.

The RAIV architecture is poised to deliver unmatched efficiency in high-demand scenarios, supporting massive parallel processing and intricate calculations. It provides an adaptable framework that caters to the needs of modern computing, ensuring balanced workloads and optimized performance. Whether used for VR/AR applications or supporting the back-end infrastructure of data-intensive operations, RAIV is designed to meet and exceed industry expectations.

RAIV’s flexible design can be tailored to enhance a broad spectrum of applications, promising accelerated innovation in sectors dependent on AI and machine learning. This GPGPU IP not only underscores Siliconarts' commitment to technological advancement but also highlights its capability to craft solutions that drive forward computational boundaries.

Siliconarts, Inc.
AI Processor, Building Blocks, CPU, GPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor, Wireless Processor

Codasip RISC-V BK Core Series

The Codasip RISC-V BK Core Series is designed to offer flexible and high-performance core options catering to a wide range of applications, from low-power tasks to intricate computational needs. This series achieves an optimal balance of power consumption and processing speed, making it suitable for applications demanding energy efficiency without compromising performance.

These cores are fully RISC-V compliant, allowing for easy customization to suit specific needs by modifying the processor's architecture or instruction set through Codasip Studio. The BK Core Series streamlines the development of precise computing solutions, ideal for IoT edge devices and sensor controllers where both small area and low power are critical.

Moreover, the BK Core Series supports architectural exploration, enabling users to optimize the core design specifically for their applications. This capability ensures that each core delivers the power, efficiency, and performance metrics required by modern technological solutions.

Codasip
AI Processor, Building Blocks, CPU, DSP Core, IoT Processor, Microcontroller, Processor Core Dependent, Processor Core Independent, Processor Cores

General Purpose Accelerator (Aptos)

The General Purpose Accelerator (Aptos) from Ascenium stands out as a redefining force in the realm of CPU technology. It seeks to overcome the limitations of traditional CPUs by providing a solution that tackles both performance inefficiencies and high energy demands. Leveraging compiler-driven architecture, this accelerator introduces a novel approach by simplifying CPU operations, making it exceptionally suited for handling generic code. Notably, it offers compatibility with the LLVM compiler, ensuring a wide range of applications can be adapted seamlessly without rewrites.

The Aptos excels in performance by embracing a highly parallel yet simplified CPU framework that significantly boosts efficiency, reportedly achieving up to four times the performance of cutting-edge CPUs. Such advancements cater not only to performance-oriented tasks but also substantially mitigate energy consumption, providing a dual benefit of cost efficiency and reduced environmental impact. This makes Aptos a valuable asset for data centers seeking to optimize their energy footprint while enhancing computational capabilities.

Additionally, the Aptos architecture supports efficient code execution by resolving tasks predominantly at compile time, allowing the processor to handle workloads more effectively. This allows standard high-level language software to run with improved efficiency across diverse computing environments, aligning with an overarching goal of greener computing. By maximizing operational efficiency and reducing carbon emissions, Aptos propels Ascenium into a leading position in the sustainable and high-performance computing sector.

Ascenium
TSMC
10nm, 12nm
CPU, Processor Core Dependent, Processor Core Independent, Processor Cores, Standard cell

Portable RISC-V Cores

Bluespec's Portable RISC-V Cores are designed to bring flexibility and extended functionality to FPGA platforms such as Achronix, Xilinx, Lattice, and Microsemi. They offer support for operating systems like Linux and FreeRTOS, making them versatile for various applications. These cores are accompanied by standard open-source development tools, which facilitate seamless integration and development processes. By utilizing these tools, developers can modify and enhance the cores to suit their specific needs, ensuring a custom fit for their projects. The portable cores are an excellent choice for developers looking to deploy RISC-V architecture across different FPGA platforms without being tied down to proprietary solutions. With Bluespec's focus on open-source, users can experience freedom in innovation and development without sacrificing performance or compatibility.

Bluespec
AMBA AHB / APB/ AXI, CPU, IoT Processor, Peripheral Controller, Processor Core Dependent, Safe Ethernet

Maverick-2 Intelligent Compute Accelerator

The Maverick-2 Intelligent Compute Accelerator (ICA) by Next Silicon represents a transformative leap in high-performance compute architecture. It seamlessly integrates into HPC systems with a pioneering software-defined approach that dynamically optimizes hardware configurations based on real-time application demands. This enables high efficiency and unparalleled performance across diverse workloads including HPC, AI, and other data-intensive applications. Maverick-2 harnesses a 5nm process technology, utilizing HBM3E memory for enhanced data throughput and efficient energy usage.

Built with developers in mind, Maverick-2 supports an array of programming languages such as C/C++, FORTRAN, and OpenMP without the necessity for proprietary stacks. This flexibility not only mitigates porting challenges but significantly reduces development time and costs. A distinguishing feature of Maverick-2 is its real-time telemetry capabilities that provide valuable insights into performance metrics, allowing for refined optimizations during execution.

The architecture supports versatile interfaces such as PCIe Gen 5 and offers configurations that accommodate complex workloads using either single- or dual-die setups. Its intelligent algorithms autonomously identify computational bottlenecks to enhance throughput and scalability, thus future-proofing investments as computing demands evolve. Maverick-2's utility spans various sectors including life sciences, energy, and fintech, underlining its adaptability and high-performance capabilities.

Next Silicon Ltd.
TSMC
28nm
11 Categories
View Details

NMP-350

The NMP-350 is an endpoint accelerator designed to deliver the lowest power and cost efficiency in its class. Ideal for applications such as driver authentication and health monitoring, it excels in automotive, AIoT/sensors, and wearable markets. The NMP-350 offers up to 1 TOPS performance with 1 MB of local memory, and is equipped with a RISC-V or Arm Cortex-M 32-bit CPU. It supports multiple use-cases, providing exceptional value for integrating AI capabilities into various devices. NMP-350's architectural design ensures optimal energy consumption, making it particularly suited to Industry 4.0 applications where predictive maintenance is crucial. Its compact nature allows for seamless integration into systems requiring minimal footprint yet substantial computational power. With support for multiple data inputs through AXI4 interfaces, this accelerator facilitates enhanced machine automation and intelligent data processing. This product is a testament to AiM Future's expertise in creating efficient AI solutions, providing the building blocks for smart devices that need to manage resources effectively. The combination of high performance with low energy requirements makes it a go-to choice for developers in the field of AI-enabled consumer technology.

AiM Future
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent
View Details

SCR9 Processor Core

Syntacore’s SCR9 processor core is a powerful engine for high-performance computing tasks, built around a dual-issue, out-of-order, 12-stage pipeline. This core is engineered for environments that demand peak computational ability and robust pipeline execution, crucial for data-intensive tasks such as AI and ML, enterprise applications, and network processing. The architecture is tailored to support extensive multicore and heterogeneous configurations, providing valuable tools for developers aiming to maximize workload efficiency and processing speed. The inclusion of a vector processing unit (VPU) underscores its capability to handle large datasets and complex calculations, while maintaining system integrity and coherence through comprehensive cache management. With support for hypervisor functionality and scalable Linux environments, the SCR9 remains a key strategic element in expanding the horizons of RISC-V-based applications. Syntacore’s extensive library of development resources further enriches the usability of this core, ensuring smooth and effective implementation across diverse technological landscapes.

Syntacore
2D / 3D, AI Processor, Coprocessor, CPU, Microcontroller, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

AI Inference Platform

Designed to cater to AI-specific needs, SEMIFIVE’s AI Inference Platform provides tailored solutions that seamlessly integrate advanced technologies to optimize performance and efficiency. This platform is engineered to handle the rigorous demands of AI workloads through a well-integrated approach combining hardware and software innovations matched with AI acceleration features. The platform supports scalable AI models, delivering exceptional processing capabilities for tasks involving neural network inference. With a focus on maximizing throughput and efficiency, it facilitates real-time processing and decision-making, which is crucial for applications such as machine learning and data analytics. SEMIFIVE’s platform simplifies AI implementation by providing an extensive suite of development tools and libraries that accelerate design cycles and enhance comprehensive system performance. The incorporation of state-of-the-art caching mechanisms and optimized data flow ensures the platform’s ability to handle large datasets efficiently.

SEMIFIVE
Samsung
5nm, 12nm, 14nm
AI Processor, Cell / Packet, CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Vision Processor
View Details

Time-Triggered Protocol

The Time-Triggered Protocol (TTP) designed by TTTech is an advanced communication protocol meant to enhance the reliability of data transmission in critical systems. Developed in compliance with the SAE AS6003 standard, this protocol is ideally suited for environments requiring synchronized operations, such as aeronautics and high-stakes energy sectors. TTP allows for precise scheduling of communication tasks, creating a deterministic communication environment where the timing of data exchanges is predictable and stable. This predictability is crucial in eliminating delays and minimizing data loss in safety-critical applications. The protocol lays the groundwork for robust telecom infrastructures in airplanes and offers a high level of system redundancy and fault tolerance. TTTech’s TTP IP core is integral to their TTP-Controller ASICs and is designed to comply with stringent integrity and safety requirements, including those outlined in RTCA DO-254 / EUROCAE ED-80. The versatility of TTP allows it to be implemented across varying FPGA platforms, broadening its applicability to a wide range of safety-critical industrial systems.
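The determinism described above comes from time-division scheduling: each node transmits only in slots fixed at design time, so every message's timing is known in advance. The sketch below illustrates that principle only; the slot and round parameters are invented for illustration and are not taken from SAE AS6003 or TTTech's implementation.

```python
# Illustrative sketch of time-triggered (TDMA) scheduling, the principle
# behind protocols like TTP: every node transmits only in its pre-assigned
# slot, so message timing is deterministic. All parameters are hypothetical.

SLOT_US = 250          # duration of one transmission slot, microseconds
SCHEDULE = ["nodeA", "nodeB", "nodeC", "nodeA"]  # one TDMA round

def transmit_window(node, round_index, slot_index):
    """Return (start, end) time in microseconds for a node's slot,
    or None if the slot is not assigned to that node."""
    if SCHEDULE[slot_index] != node:
        return None
    round_len = len(SCHEDULE) * SLOT_US
    start = round_index * round_len + slot_index * SLOT_US
    return (start, start + SLOT_US)

# nodeB's slot in round 0 starts at a time fixed by the schedule alone:
print(transmit_window("nodeB", 0, 1))  # (250, 500)
```

Because the window depends only on the static schedule, any node (or offline analysis tool) can compute it, which is what makes timing predictable and data loss detectable in safety-critical designs.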

TTTech Computertechnik AG
AMBA AHB / APB/ AXI, CAN, CAN XL, CAN-FD, Ethernet, FlexRay, LIN, MIPI, Processor Core Dependent, Safe Ethernet, Temperature Sensor
View Details

NMP-550

Tailored for high efficiency, the NMP-550 accelerator advances performance in automotive, mobile, AR/VR, and related fields. Designed with versatility in mind, it finds applications in driver monitoring, video analytics, and security through its robust capabilities. Offering up to 6 TOPS of processing power, it includes up to 6 MB of local memory and a choice of RISC-V or Arm Cortex-M/A 32-bit CPU. In environments like drones, robotics, and medical devices, the NMP-550's enhanced computational capabilities enable superior machine learning and AI functions. This is further supported by its ability to handle comprehensive data streams efficiently, making it ideal for tasks such as image analytics and fleet management. The NMP-550 exemplifies how AiM Future harnesses cutting-edge technology to develop powerful processors that meet contemporary demands for higher performance and integration into a multitude of smart technologies.

AiM Future
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent, Receiver/Transmitter
View Details

AndeShape Platforms

The AndeShape Platforms are designed to streamline system development by providing a diverse suite of IP solutions for SoC architecture. These platforms encompass a variety of product categories, including the AE210P for microcontroller applications, AE300 and AE350 AXI fabric packages for scalable SoCs, and AE250 AHB platform IP. These solutions facilitate efficient system integration with Andes processors. Furthermore, AndeShape offers a sophisticated range of development platforms and debugging tools, such as ADP-XC7K160/410, which reinforce the system design and verification processes, providing a comprehensive environment for the innovative realization of IoT and other embedded applications.

Andes Technology
Embedded Memories, Microcontroller, Processor Core Dependent, Processor Core Independent, Standard cell
View Details

ISPido on VIP Board

ISPido on the VIP Board is tailored for Lattice Semiconductors' Video Interface Platform, providing a runtime solution optimized for delivering crisp, balanced images in real-time. This solution offers two primary configurations: automatic deployment for optimal settings instantly upon startup, and a manual, menu-driven interface allowing users to fine-tune settings such as gamma tables and convolution filters. Utilizing the CrossLink VIP Input Bridge with Sony IMX 214 sensors and an ECP5-85 FPGA, it provides HD output in HDMI YCrCb format, ensuring high-quality image resolution and real-time calibration.
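The manual tuning controls mentioned above, gamma tables and convolution filters, are standard image-pipeline operations. The sketch below shows what each does in a minimal form; the values and kernel are illustrative, not ISPido's actual configuration.

```python
# Minimal sketch of two ISP tuning controls: a gamma lookup table (LUT)
# and a small convolution filter. Values are illustrative examples only.

def gamma_lut(gamma, depth=256):
    """Precompute an 8-bit gamma-correction lookup table."""
    return [round(255 * ((i / 255) ** (1.0 / gamma))) for i in range(depth)]

def convolve3(row, kernel):
    """Apply a 1x3 kernel along one image row (edges clamped)."""
    out = []
    for i in range(len(row)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = min(max(i + k - 1, 0), len(row) - 1)
            acc += w * row[j]
        out.append(round(acc))
    return out

lut = gamma_lut(2.2)
print(lut[128])  # mid-grey pixel lifted by the gamma-2.2 curve
print(convolve3([10, 20, 30, 40], [0.25, 0.5, 0.25]))  # mild smoothing
```

In a real pipeline the LUT is applied per pixel and the filter is a 2-D kernel over the full frame; the menu-driven interface described above would let a user swap these tables and kernels at runtime.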

DPControl
19 Categories
View Details

Zhenyue 510 SSD Controller

The Zhenyue 510 SSD Controller represents a pivotal advancement in solid-state drive technology, tailored to meet the rigorous demands of enterprise-grade storage solutions. It leverages state-of-the-art technology to deliver exceptional data throughput and reliability, ensuring swift data access and enhanced storage efficiency. This controller is engineered to minimize latency, making it highly suitable for environments where data speed and reliability are crucial, such as cloud computing and enterprise data centers. With the ability to handle large volumes of data effortlessly, the Zhenyue 510 SSD Controller sets new benchmarks for performance and energy efficiency in storage solutions.

T-Head Semiconductor
eMMC, Flash Controller, NAND Flash, NVM Express, ONFI Controller, Processor Core Dependent, RLDRAM Controller, SAS, SATA, SDRAM Controller, SRAM Controller
View Details

RISCV SoC - Quad Core Server Class

The RISCV SoC developed by Dyumnin Semiconductors is engineered around a 64-bit quad-core server-class RISCV CPU, aiming to bridge various application needs with an integrated, holistic system design. Each subsystem of this SoC, from AI/ML capabilities to automotive and multimedia functionalities, is built to deliver optimal performance and streamlined operation. Designed as a reference model, this SoC enables quick adaptation and deployment, significantly reducing time-to-market for clients. The AI Accelerator subsystem pairs a custom central processing unit with a specialized tensor flow unit to accelerate AI operations. In the multimedia domain, the SoC integrates HDMI, DisplayPort, MIPI, and other advanced graphics and audio technologies, ensuring versatile application across various multimedia requirements. Memory handling is another strength, with support for protocols ranging from DDR and MMC to more advanced interfaces like ONFI and SD/SDIO, ensuring seamless connectivity with a wide array of memory modules. The communication subsystem encompasses a broad spectrum of connectivity protocols, including PCIe, Ethernet, USB, and SPI, providing a well-rounded solution for modern communication challenges. The automotive subsystem, offering CAN and CAN-FD protocols, further extends its utility into automotive connectivity.

Dyumnin Semiconductors
28 Categories
View Details

Codasip L-Series DSP Core

The Codasip L-Series DSP Core is tailored for applications that require efficient digital signal processing capabilities. Known for its adaptability and top-notch performance, this core is a prime choice for tasks demanding real-time processing and a high level of computational density. The L-Series embraces the RISC-V open standard with an enhanced design that allows for customizing the instruction set and leveraging unique microarchitectural features. Such adaptability is ideal for applications in industries where digital signal manipulation is critical, such as audio processing, telecommunications, and advanced sensor applications. Users are empowered through Codasip Studio to implement specific enhancements and modifications, aligning the core's capabilities with specialized operational requirements. This core not only promises high-speed processing but also ensures that resource allocation is optimized for each specific digital processing task.

Codasip
AI Processor, Audio Processor, CPU, DSP Core, Multiprocessor / DSP, Processor Core Dependent
View Details

Veyron V1 CPU

The Veyron V1 is a high-performance RISC-V CPU designed to meet the rigorous demands of modern data centers and compute-intensive applications. This processor is tailored for cloud environments requiring extensive compute capabilities, offering substantial power efficiency while optimizing processing workloads. It provides comprehensive architectural support for virtualization and efficient task management with its robust feature set. Incorporating advanced RISC-V standards, the Veyron V1 ensures compatibility and scalability across a wide range of industries, from enterprise servers to high-performance embedded systems. Its architecture is engineered to offer seamless integration, providing an excellent foundation for robust, scalable computing designs. Equipped with state-of-the-art processing cores and enhanced vector acceleration, the Veyron V1 delivers unmatched throughput and performance management, making it suitable for use in diverse computing environments.

Ventana Micro Systems
AI Processor, Audio Processor, Coprocessor, CPU, DSP Core, Processor Core Dependent, Processor Core Independent, Processor Cores, Security Processor
View Details

iCan PicoPop® System on Module

The iCan PicoPop® is a highly compact System on Module (SOM) based on the Zynq UltraScale+ MPSoC from Xilinx, suited for high-performance embedded applications in aerospace. Known for its advanced signal processing capabilities, it is particularly effective in video processing contexts, offering efficient data handling and throughput. Its compact size and performance make it ideal for integration into sophisticated systems where space and performance are critical.

OXYTRONIC
12 Categories
View Details

aiData

aiData is designed to streamline the data pipeline for developing models for Advanced Driver-Assistance Systems and Automated Driving solutions. This automated system provides a comprehensive method of managing and processing data, from collection through curation, annotation, and validation. It significantly reduces the time required for data processing by automating many labor-intensive tasks, enabling teams to focus more on development rather than data preparation. The aiData platform includes sophisticated tools for recording, managing, and annotating data, ensuring accuracy and traceability through all stages of the MLOps workflow. It supports the creation of high-quality training datasets, essential for developing reliable and effective AI models. The platform's capabilities extend beyond basic data processing by offering advanced features such as versioning and metrics analysis, allowing users to track data changes over time and evaluate dataset quality before training. The aiData Recorder feature ensures high-quality data collection tailored to diverse sensor configurations, while the Auto Annotator quickly processes data for a variety of objects using AI algorithms, delivering superior precision levels. These features are complemented by aiData Metrics, which provide valuable insights into dataset completeness and adequacy in covering expected operational domains. With seamless on-premise or cloud deployment options, aiData empowers global automotive teams to collaborate efficiently, offering all necessary tools for a complete data management lifecycle. Its integration versatility supports a wide array of applications, helping improve the speed and effectiveness of deploying ADAS models.

aiMotive
AI Processor, AMBA AHB / APB/ AXI, Audio Interfaces, Content Protection Software, Digital Video Broadcast, Embedded Memories, H.264, Processor Core Dependent, Vision Processor
View Details

ISPido

ISPido offers a comprehensive set of IP cores focused on high-resolution image signal processing and tuning across multiple devices and platforms, including CPU, GPU, VPU, FPGA, and ASIC technologies. Its flexibility is a standout feature, accommodating ultra-low power devices as well as systems exceeding 8K resolution. Designed for devices where power efficiency and high-quality image processing are paramount, ISPido adapts to a range of hardware architectures to deliver optimal image quality and processing capabilities. The IP has been widely adopted in various applications, making it a cornerstone for industries requiring advanced image calibration and processing capabilities.

DPControl
22 Categories
View Details

RISC-V CPU IP NA Class

Engineered specifically for the automotive industry, the NA Class IP by Nuclei complies with the stringent ISO 26262 functional safety standard. This processor is crafted to handle complex automotive applications, offering the flexibility and rigorous safety protocols necessary for mission-critical transportation technologies. Incorporating a range of functional safety features, the NA Class IP is equipped to ensure not only performance but also reliability and safety in high-stakes vehicular environments.

Nuclei System Technology
AI Processor, CAN-FD, CPU, Cryptography Cores, FlexRay, Microcontroller, Platform Security, Processor Core Dependent, Processor Cores, Security Processor, Vision Processor
View Details

TUNGA

TUNGA is an advanced System on Chip (SoC) leveraging the strengths of Posit arithmetic for accelerated High-Performance Computing (HPC) and Artificial Intelligence (AI) tasks. The TUNGA SoC integrates multiple CRISP-cores, employing Posit as a core technology for real-number calculations. This multi-core RISC-V SoC is uniquely equipped with a fixed-point accumulator known as the QUIRE, which allows extremely precise computation across vectors of up to 2 billion entries. The TUNGA SoC includes programmable FPGA gates for accelerating field-critical functions. These gates are instrumental in speeding up data center services, offloading tasks from the CPU, and advancing AI training and inference efficiency using non-standard data types. TUNGA's architecture is tailored for applications demanding high precision, including cryptography and variable-precision computing tasks, facilitating the transition towards next-generation arithmetic. TUNGA stands out by offering customizable features and rapid processing capabilities, making it suitable not only for typical data center functions but also for complex, precision-demanding workloads. By capitalizing on Posit arithmetic, TUNGA aims to deliver more efficient and powerful computational performance, reflecting a strategic advancement in handling complex data-oriented processes.
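The precision benefit of a quire-style accumulator is that sums of products are accumulated without intermediate rounding, with a single rounding step at the end. The sketch below illustrates that idea only: Python's exact `Fraction` stands in for the hardware's wide fixed-point quire register, and the inputs are contrived to show the effect.

```python
# Hedged sketch of quire-style accumulation (the idea behind TUNGA's QUIRE):
# accumulate exactly, round once at the end. Fraction is a software stand-in
# for a wide fixed-point register; it is not the hardware format.
from fractions import Fraction

def dot_float(xs, ys):
    """Ordinary float dot product: rounds after every multiply-add."""
    acc = 0.0
    for x, y in zip(xs, ys):
        acc += x * y
    return acc

def dot_quire(xs, ys):
    """Quire-style dot product: exact accumulation, one final rounding."""
    acc = Fraction(0)
    for x, y in zip(xs, ys):
        acc += Fraction(x) * Fraction(y)
    return float(acc)  # the single rounding step

# A cancellation-heavy case where per-step rounding loses information:
xs = [1e16, 1.0, -1e16]
ys = [1.0, 1.0, 1.0]
print(dot_float(xs, ys))  # 0.0 -- the 1.0 was absorbed and lost
print(dot_quire(xs, ys))  # 1.0 -- exact accumulation preserves it
```

This is why long dot products (the "2 billion entries" case above) benefit from a quire: error no longer grows with vector length.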

Calligo Technologies
AI Processor, CPU, DSP Core, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent
View Details

Prodigy Universal Processor

Tachyum's Prodigy Universal Processor marks a significant milestone as it combines the functionalities of Central Processing Units (CPUs), General-Purpose Graphics Processing Units (GPGPUs), and Tensor Processing Units (TPUs) into a single cohesive architecture. This groundbreaking design is tailored to meet the escalating demands of artificial intelligence, high-performance computing, and hyperscale data centers by offering unparalleled performance, energy efficiency, and high utilization rates. The Prodigy processor not only tackles common data center challenges like elevated power consumption and stagnating processor performance but also offers a robust solution to enhance server utilization and reduce the carbon footprint of massive computational installations. Notably, it thrives on a simplified programming model grounded in coherent multiprocessor architecture, thereby enabling seamless execution of an array of AI disciplines like Explainable AI, Bio AI, and deep machine learning within a single hardware platform.

Tachyum Inc.
13 Categories
View Details

RISC-V Processor Core

The RISC-V Processor Core developed by Fraunhofer IPMS is a versatile processor designed for the flexible demands of modern digital systems. It adheres to the open RISC-V architecture, ensuring a customizable and extendable computing platform. This processor core is ideal for applications requiring low-power consumption without sacrificing processing power, making it suitable for IoT devices and embedded systems. Built with a focus on energy efficiency and speed, the RISC-V core is capable of executing complex operations at rapid speeds, making it a reliable choice for time-sensitive tasks and high-performance computations. It supports a wide range of data processing capabilities, delivering optimized performance for various applications, from consumer electronics to advanced automotive systems. With its open-source foundation, this processor core allows for extensive customization, fostering innovation and adaptability in design processes. By supporting seamless integration into various system architectures, the RISC-V core ensures compatibility and scalability, crucial for modern technological advancements.

Fraunhofer Institute for Photonic Microsystems (IPMS)
CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

Spectral CustomIP

Spectral CustomIP encompasses an expansive suite of specialized memory architectures, tailored for diverse integrated circuit applications. Known for breadth in memory compiler designs, Spectral offers solutions like Binary and Ternary CAMs, various Multi-Ported memories, Low Voltage SRAMs, and advanced cache configurations. These bespoke designs integrate either foundry-standard or custom-designed bit cells providing robust performance across varied operational scenarios. The CustomIP products are engineered for low dynamic power usage and high density, utilizing Spectral’s Memory Development Platform. Available in source code form, these solutions offer users the flexibility to modify designs, adapt them for new technologies, or extend capabilities—facilitating seamless integration within standard CMOS processes or more advanced SOI and embedded Flash processes. Spectral's proprietary SpectralTrak technology enhances CustomIP with precise environmental monitoring, ensuring operational integrity through real-time Process, Voltage, and Temperature adjustments. With options like advanced compiler features, multi-banked architectures, and standalone or compiler instances, Spectral CustomIP suits businesses striving to distinguish their IC offerings with unique, high-performance memory solutions.
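The binary and ternary CAMs mentioned above answer a different question than an ordinary memory: given a key, which stored entry matches? A ternary CAM (TCAM) adds per-bit "don't care" masks with priority encoding. The sketch below is a purely illustrative software model; entry widths and table contents are invented.

```python
# Illustrative software model of a ternary CAM (TCAM) lookup:
# each entry stores a (value, mask) pair; bits where mask=0 are don't-care.
# Hardware does this comparison against all entries in parallel in one cycle;
# this loop models only the matching semantics, not the timing.

def tcam_lookup(entries, key):
    """Return the address of the first matching entry (priority encoding),
    or None if no entry matches."""
    for addr, (value, mask) in enumerate(entries):
        if (key & mask) == (value & mask):
            return addr
    return None

table = [
    (0b1010_0000, 0b1111_0000),  # matches any key with prefix 1010
    (0b1100_1100, 0b1111_1111),  # exact-match entry (all bits cared)
]
print(tcam_lookup(table, 0b1010_0111))  # 0 -- hits the prefix entry
print(tcam_lookup(table, 0b1100_1100))  # 1 -- hits the exact entry
```

A binary CAM is the special case where every mask is all-ones; the don't-care masks are what make TCAMs useful for longest-prefix routing and packet classification.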

Spectral Design & Test Inc.
GLOBALFOUNDRIES
22nm
Embedded Memories, I/O Library, Processor Core Dependent, SDRAM Controller, Standard cell
View Details

SiFive Essential

The SiFive Essential family provides a comprehensive range of embedded processor cores that can be tailored to various application needs. This series incorporates silicon-proven, pre-defined CPU cores with a focus on scalability and configurability, ranging from simple 32-bit MCUs to advanced 64-bit processors capable of running embedded RTOS and full-fledged operating systems like Linux. SiFive Essential empowers users with the flexibility to customize the design for specific performance, power, and area requirements. The Essential family introduces significant advancements in processing capabilities, allowing users to design processors that meet precise application needs. It features a rich set of options for interface customizations, providing seamless integration into broader SoC designs. Moreover, the family supports an 8-stage pipeline architecture and, in some configurations, offers dual-issue superscalar capabilities for enhanced processing throughput. For applications where security and traceability are crucial, the Essential family includes WorldGuard technology, which ensures comprehensive protection across the entire SoC, safeguarding against unauthorized access. The flexible design opens up various use cases, from IoT devices and microcontrollers to real-time control applications and beyond.

SiFive, Inc.
Building Blocks, Content Protection Software, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Standard cell
View Details

Trifecta-GPU

The Trifecta-GPU delivers exceptional computational power using the NVIDIA RTX A2000 embedded GPU. Focused on the modular test and measurement (T&M) and electronic warfare (EW) markets, it provides 8.3 FP32 TFLOPS of compute performance. It is tailored for advanced signal processing and machine learning, making it indispensable for modern, software-defined signal processing applications. This GPU is part of the COTS PXIe/CPCIe modular family, known for its flexibility and ease of use. The NVIDIA GPU integration means users can expect robust performance for AI inference applications, facilitating quick deployment in scenarios requiring advanced data processing. Incorporating the latest in graphical performance, the Trifecta-GPU supports a broad range of applications, from high-end computing tasks to graphics-intensive processes. It is particularly beneficial for those needing a reliable and powerful GPU for modular T&M and EW projects.

RADX Technologies, Inc.
AI Processor, CPU, DSP Core, GPU, Multiprocessor / DSP, Peripheral Controller, Processor Core Dependent, Processor Core Independent, Vision Processor
View Details

SiFive Performance

The SiFive Performance family is dedicated to offering high-throughput, low-power processor solutions, suitable for a wide array of applications from data centers to consumer devices. This family includes a range of 64-bit, out-of-order cores configured with options for vector computations, making it ideal for tasks that demand significant processing power alongside efficiency. Performance cores provide unmatched energy efficiency while accommodating a breadth of workload requirements. Their architecture supports up to six-wide out-of-order processing with tailored options that include multiple vector engines. These cores are designed for flexibility, enabling various implementations in consumer electronics, network storage solutions, and complex multimedia processing. The SiFive Performance family facilitates a mix of high performance and low power usage, allowing users to balance the computational needs with power consumption effectively. It stands as a testament to SiFive’s dedication to enabling flexible tech solutions by offering rigorous processing capabilities in compact, scalable packages.

SiFive, Inc.
CPU, DSP Core, Ethernet, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor, Wireless Processor
View Details

IP Platform for Low-Power IoT

The silicon IP Platform for Low-Power IoT by Low Power Futures integrates pre-validated, configurable building blocks tailored for IoT device creation. It provides a turnkey solution to accelerate product development, with options to employ both ARM and RISC-V processors. With a focus on reducing energy consumption, the platform is ready for a variety of applications, ensuring a seamless transition for products from conception to market. The platform is crucial for developing smart IoT solutions that require secure and reliable wireless communications across industries like healthcare, smart home, and industrial automation.

Low Power Futures
12 Categories
View Details

RISC-V CPU IP NX Class

The NX Class RISC-V CPU IP by Nuclei is characterized by its 64-bit architecture, making it a robust choice for storage, AR/VR, and AI applications. This processing unit is designed to accommodate high data throughput and demanding computational tasks. By leveraging advanced capabilities, such as virtual memory and enhanced processing power, the NX Class facilitates cutting-edge technological applications and is adaptable for integration into a vast array of high-performance systems.

Nuclei System Technology
Building Blocks, CPU, DSP Core, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent, Processor Cores, Vision Processor, Wireless Processor
View Details

RISC-V CPU IP NI Class

The NI Class RISC-V CPU IP caters to communication, video processing, and AI applications, providing a balanced architecture for intensive data handling and processing capabilities. With a focus on high efficiency and flexibility, this processor supports advanced data crunching and networking applications, ensuring that systems run smoothly and efficiently even when managing complex algorithms. The NI Class upholds Nuclei's commitment to providing versatile solutions in the evolving tech landscape.

Nuclei System Technology
3GPP-LTE, AI Processor, CPU, Cryptography Cores, Microcontroller, Processor Core Dependent, Processor Cores, Security Processor, Vision Processor
View Details

ARM M-Class Based ASICs

Designed for integration within various industry systems, ARM M-Class based ASICs from ASIC North provide flexibility and versatility. These chips, built around the ARM Cortex-M architecture, are optimized for embedded applications, offering a balance of performance and power efficiency. They are particularly suited to IoT applications due to their robust performance metrics and modular design adaptability. ASIC North ensures that each ARM M-Class ASIC is thoroughly verified, delivering optimal reliability for deployment in complex environments.

ASIC North
AI Processor, Building Blocks, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

Origin E1

The Origin E1 is a streamlined neural processing unit designed specifically for always-on applications in personal electronics and smart devices such as smartphones and security systems. This processor focuses on delivering highly efficient AI performance, achieving around 18 TOPS per watt. With its low power requirements, the E1 is ideally suited for tasks demanding continuous data sampling, such as camera operations in smart surveillance systems where it runs on less than 20mW of power. Its packet-based architecture ensures efficient resource utilization, maintaining high performance with lower power and area consumption. The E1's adaptability is enhanced through customizable options, allowing it to meet specific PPA requirements effectively, making it the go-to choice for applications seeking to improve user privacy and experience by minimizing external memory use.
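The listing's two efficiency figures, roughly 18 TOPS/W and a sub-20 mW budget, together imply a sustained throughput; the arithmetic below is a back-of-envelope check using the listing's own numbers, not a measured specification.

```python
# Back-of-envelope check of the Origin E1 efficiency figures quoted above.
# Inputs are the listing's numbers; the arithmetic is only illustrative.

tops_per_watt = 18.0   # quoted efficiency, TOPS per watt
power_w = 0.020        # quoted always-on budget: 20 mW

sustained_tops = tops_per_watt * power_w
print(f"{sustained_tops:.2f} TOPS within a 20 mW budget")  # 0.36 TOPS
```

That is on the order of hundreds of GOPS for an always-on budget, which is why this class of NPU targets continuous tasks like camera wake-word and presence detection rather than full-frame inference.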

Expedera
14 Categories
View Details

Neural Network Accelerator

The Neural Network Accelerator by Gyrus AI is an advanced compute solution optimized for neural network applications. It features native graph processing capabilities that significantly enhance the computational efficiency of AI models. This IP supports high-speed operation at 30 TOPS/W, offering exceptional performance that significantly reduces the clock cycles typically required by other systems.

Moreover, the architecture is designed to consume 10-20 times less power, benefitting from a low-memory-usage configuration. This efficiency is further highlighted by the IP’s ability to achieve an 80% utilization rate across various model structures, which translates into die area requirements 8 to 10 times smaller than conventional designs.

Gyrus AI’s Neural Network Accelerator also comes with software tools tailored to run neural networks on the platform, making it a practical choice for edge computing applications. It supports large-scale AI computations while minimizing power consumption and space requirements, making it ideal for high-performance environments.

Gyrus AI
AI Processor, Coprocessor, CPU, DSP Core, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Security Processor, Vision Processor
View Details