Processor Semiconductor IPs

The 'Processor' category in the Silicon Hub Semiconductor IP catalog is a cornerstone of modern electronic device design. Processor semiconductor IPs serve as the brain of electronic devices, driving operations, processing data, and performing complex computations essential for a multitude of applications. These IPs include a wide variety of specific types such as CPUs, DSP cores, and microcontrollers, each designed with unique capabilities and applications in mind.

In this category, you'll find building blocks, which are fundamental components for constructing more sophisticated processors, and coprocessors that augment the capabilities of a main processor, enabling efficient handling of specialized tasks. The versatility of processor semiconductor IPs is evident in subcategories like AI processors, audio processors, and vision processors, each tailored to meet the demands of today’s smart technologies. These processors are central to developing innovative products that leverage artificial intelligence, enhance audio experiences, and enable complex image processing capabilities, respectively.

Moreover, there are security processors that empower devices with robust security features to protect sensitive data and communications, as well as IoT processors and wireless processors that drive connectivity and integration of devices within the Internet of Things ecosystem. These processors ensure reliable and efficient data processing in increasingly connected and smart environments.

Overall, the processor semiconductor IP category is pivotal for enabling the creation of advanced electronic devices across a wide range of industries, from consumer electronics to automotive systems, providing the essential processing capabilities needed to meet the ever-evolving technological demands of today's world. Whether you're looking for individual processor cores or fully integrated processing solutions, this category offers a comprehensive selection to support any design or application requirement.

Akida Neural Processor IP

Akida's Neural Processor IP represents a leap in AI architecture design, tailored to provide exceptional energy efficiency and processing speed for an array of edge computing tasks. At its core, the processor mimics the synaptic activity of the human brain, efficiently executing tasks that demand high-speed computation and minimal power usage. This processor is equipped with configurable neural nodes capable of supporting network types such as convolutional and fully connected neural networks. Each node accommodates a range of MAC operations, enhancing scalability from basic to complex deployment requirements. This scalability enables the development of lightweight AI solutions suited for consumer electronics as well as robust systems for industrial use. Onboard features like event-based processing and low-latency data communication significantly decrease the strain on host processors, enabling faster and more autonomous system responses. Akida's versatile functionality and ability to learn on the fly make it a cornerstone for next-generation technology solutions that aim to blend cognitive computing with practical, real-world applications.

BrainChip
AI Processor, Coprocessor, CPU, Digital Video Broadcast, Network on Chip, Platform Security, Processor Core Independent, Vision Processor
View Details

KL730 AI SoC

The KL730 is a third-generation AI chip that integrates an advanced reconfigurable NPU architecture, delivering up to 8 TOPS of computing power. This cutting-edge technology enhances computational efficiency across a range of applications, including CNN and transformer networks, while minimizing DDR bandwidth requirements. The KL730 also boasts enhanced video processing capabilities, supporting 4K 60FPS outputs. Backed by Kneron's decade-plus of ISP expertise, the KL730 stands out with its noise reduction, wide dynamic range, fisheye correction, and low-light imaging performance. It caters to markets like intelligent security, autonomous vehicles, video conferencing, and industrial camera systems, among others.
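As a rough sense of the video-side load behind the 4K 60FPS figure, the short sketch below (plain arithmetic only, no Kneron tooling assumed) works out the pixel rate the imaging pipeline must sustain.

```python
# Pixel rate implied by a 4K UHD stream at 60 frames per second.
width, height, fps = 3840, 2160, 60

pixels_per_frame = width * height            # 8,294,400 pixels per frame
pixels_per_second = pixels_per_frame * fps   # ~497.7 million pixels per second

print(f"{pixels_per_second / 1e6:.1f} Mpixel/s sustained by the ISP/video path")
```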

Kneron
TSMC
12nm
16 Categories
View Details

Akida 2nd Generation

The second-generation Akida platform builds upon the foundation of its predecessor with enhanced computational capabilities and increased flexibility for a broader range of AI and machine learning applications. This version supports 8-bit weights and activations in addition to the flexible 4- and 1-bit operations, making it a versatile solution for high-performance AI tasks. Akida 2 introduces support for programmable activation functions and skip connections, further enhancing the efficiency of neural network operations. These capabilities are particularly advantageous for implementing sophisticated machine learning models that require complex, interconnected processing layers. The platform also features support for Spatio-Temporal and Temporal Event-Based Neural Networks, advancing its application in real-time, on-device AI scenarios. Built as a silicon-proven, fully digital neuromorphic solution, Akida 2 is designed to integrate seamlessly with various microcontrollers and application processors. Its highly configurable architecture offers post-silicon flexibility, making it an ideal choice for developers looking to tailor AI processing to specific application needs. Whether for low-latency video processing, real-time sensor data analysis, or interactive voice recognition, Akida 2 provides a robust platform for next-generation AI developments.

BrainChip
11 Categories
View Details

Metis AIPU PCIe AI Accelerator Card

The Metis AIPU PCIe AI Accelerator Card is engineered for developers demanding superior AI performance. With its quad-core Metis AIPU, this card delivers up to 214 TOPS, tackling challenging vision applications with unmatched efficiency. The PCIe card is designed with user-friendly integration in mind, featuring the Voyager SDK software stack that accelerates application deployment. Offering impressive processing speeds, the card supports up to 3,200 FPS for ResNet-50 models, providing a competitive edge for demanding AI tasks. Its design ensures it meets the needs of a wide array of AI applications, allowing for scalability and adaptability in various use cases.

Axelera AI
2D / 3D, AI Processor, AMBA AHB / APB/ AXI, Building Blocks, CPU, Ethernet, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor, WMV
View Details

1G to 224G SerDes

The 1G to 224G SerDes is a versatile serializer/deserializer technology designed to facilitate high-speed data transfers across various interface standards. It caters to stringent speed requirements by supporting a wide range of data rates and signaling schemes, allowing efficient integration into comprehensive communication systems. This SerDes technology excels in delivering reliable, low-latency connections, making it ideal for hyperscale data centers, AI, and 5G networking where fast, efficient data processing is essential. The broad compatibility with numerous industry protocols also ensures seamless interoperability with existing systems. Adapted for scalability, the 1G to 224G SerDes provides design flexibility, encouraging implementation across a variety of demanding environments. Its sophisticated architecture promotes energy efficiency and robust performance, crucial for addressing the ever-growing connectivity demands of modern technology infrastructures.

Alphawave Semi
GLOBALFOUNDRIES, UMC
7nm, 14nm
AMBA AHB / APB/ AXI, DSP Core, Ethernet, PCI, USB, Wireless Processor
View Details

Akida IP

The Akida IP is a groundbreaking neural processor designed to emulate the cognitive functions of the human brain within a compact and energy-efficient architecture. This processor is specifically built for edge computing applications, providing real-time AI processing for vision, audio, and sensor fusion tasks. The scalable neural fabric, ranging from 1 to 128 nodes, features on-chip learning capabilities, allowing devices to adapt and learn from new data with minimal external inputs, enhancing privacy and security by keeping data processing localized. Akida's unique design supports 4-, 2-, and 1-bit weight and activation operations, maximizing computational efficiency while minimizing power consumption. This flexibility in configuration, combined with a fully digital neuromorphic implementation, ensures a cost-effective and predictable design process. Akida is also equipped with event-based acceleration, drastically reducing the demands on the host CPU by facilitating efficient data handling and processing directly within the sensor network. Additionally, Akida's on-chip learning supports incremental learning techniques like one-shot and few-shot learning, making it ideal for applications that require quick adaptation to new data. These features collectively support a broad spectrum of intelligent computing tasks, including object detection and signal processing, all performed at the edge, thus eliminating the need for constant cloud connectivity.
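To make the 4-, 2-, and 1-bit weight options concrete, the sketch below compares weight storage for a hypothetical layer against an 8-bit baseline; the layer dimensions are illustrative assumptions, not Akida specifications.

```python
# Weight-storage comparison for an illustrative 1,024 x 1,024 fully connected layer.
weights = 1024 * 1024                     # number of weights in the example layer

for bits in (8, 4, 2, 1):                 # bit widths discussed for Akida
    kib = weights * bits / 8 / 1024       # storage at this precision, in KiB
    print(f"{bits}-bit weights: {kib:,.0f} KiB")
```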

BrainChip
AI Processor, Audio Processor, Coprocessor, CPU, Cryptography Cores, GPU, Input/Output Controller, IoT Processor, Platform Security, Processor Core Independent, Vision Processor
View Details

Yitian 710 Processor

The Yitian 710 Processor is a groundbreaking component in processor technology, designed with cutting-edge architecture to enhance computational efficiency. This processor is tailored for cloud-native environments, offering robust support for high-demand computing tasks. It is engineered to deliver significant improvements in performance, making it an ideal choice for data centers aiming to optimize their processing power and energy efficiency. With its advanced features, the Yitian 710 stands at the forefront of processor innovation, ensuring seamless integration with diverse technology platforms and enhancing the overall computing experience across industries.

T-Head Semiconductor
AI Processor, AMBA AHB / APB/ AXI, Audio Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Independent, Processor Cores, Vision Processor
View Details

MetaTF

MetaTF is BrainChip's premier development tool platform designed to complement its neuromorphic technology solutions. This platform is a comprehensive toolkit that empowers developers to convert and optimize standard machine learning models into formats compatible with BrainChip's Akida technology. One of its key advantages is its ability to adjust models into sparse formats, enhancing processing speed and reducing power consumption. The MetaTF framework provides an intuitive interface for integrating BrainChip’s specialized AI capabilities into existing workflows. It supports streamlined adaptation of models to ensure they are optimized for the unique characteristics of neuromorphic processing. Developers can utilize MetaTF to rapidly iterate and refine AI models, making the deployment process smoother and more efficient. By providing direct access to pre-trained models and tuning mechanisms, MetaTF allows developers to capitalize on the benefits of event-based neural processing with minimal configuration effort. This platform is crucial for advancing the application of machine learning across diverse fields such as IoT devices, healthcare technology, and smart infrastructure.
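The conversion flow described above — start from a standard Keras model, quantize it, then convert it for Akida — might look roughly like the sketch below. Only the Keras portion is standard TensorFlow; the `cnn2snn` import and its `quantize`/`convert` calls are assumptions about MetaTF's interface and are left commented out rather than presented as the verified API.

```python
# Illustrative MetaTF-style flow; the Akida-specific calls are assumptions (commented out).
import tensorflow as tf
# from cnn2snn import quantize, convert   # hypothetical MetaTF imports, not verified

# 1. An ordinary Keras model standing in for a pre-trained network.
model = tf.keras.Sequential([
    tf.keras.layers.Input((32, 32, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.summary()

# 2. Quantize weights/activations to Akida-friendly low precision (hypothetical call).
# quantized = quantize(model, weight_quantization=4, activ_quantization=4)

# 3. Convert the quantized model to an Akida-executable form and run it (hypothetical).
# akida_model = convert(quantized)
# akida_model.predict(samples)
```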

BrainChip
AI Processor, Coprocessor, Processor Core Independent, Vision Processor
View Details

AI Camera Module

The AI Camera Module from Altek is a versatile, high-performance component designed to meet the increasing demand for smart vision solutions. This module pairs a refined imaging lens design with tightly integrated hardware and software capabilities to create a seamless operational experience. Its design is reinforced by Altek's deep collaboration with leading global brands, ensuring a top-tier product capable of handling diverse market requirements. Built for the interplay of AI and IoT, the module delivers high-resolution imaging suited to edge computing applications. It addresses diverse end-user needs through customizable device functionality, supporting demanding processing requirements such as 2K and 4K video. This module showcases Altek's strength in providing comprehensive, all-in-one camera solutions that leverage sophisticated imaging and rapid processing to handle challenging conditions and demands. The AI Camera's technical blueprint supports complex AI algorithms, enhancing not just image quality but also the device's interactive capacity through facial recognition and image tracking technology.

Altek Corporation
Samsung
22nm
2D / 3D, AI Processor, AMBA AHB / APB/ AXI, Audio Interfaces, GPU, Image Conversion, IoT Processor, JPEG, Receiver/Transmitter, SATA, Vision Processor
View Details

Speedcore Embedded FPGA IP

Speedcore embedded FPGA (eFPGA) IP represents a notable advancement in integrating programmable logic into ASICs and SoCs. Unlike standalone FPGAs, eFPGA IP lets designers tailor the exact dimensions of logic, DSP, and memory needed for their applications, making it an ideal choice for areas like AI, ML, 5G wireless, and more. Speedcore eFPGA can significantly reduce system costs, power requirements, and board space while maintaining flexibility by embedding only the necessary features into production. This IP is programmable using the same Achronix Tool Suite employed for standalone FPGAs. The Speedcore design process is supported by comprehensive resources and guidance, ensuring efficient integration into various semiconductor projects.

Achronix
TSMC
All Process Nodes
Processor Cores
View Details

Veyron V2 CPU

The Veyron V2 CPU is Ventana's second-generation high-performance RISC-V processor, designed for cloud, data center, edge, and automotive applications. This processor offers outstanding compute capabilities with its server-class architecture, optimized for handling complex, virtualized, and cloud-native workloads efficiently. The Veyron V2 is available both as IP for custom SoCs and as a complete silicon platform, ensuring flexibility for integration into various technological infrastructures. It is fully compliant with the RISC-V RVA23 specification and combines high instructions per clock (IPC) with a power-efficient architecture. Comprising multiple core clusters, this CPU is capable of delivering superior AI and machine learning performance, significantly boosting throughput and energy efficiency. The Veyron V2's advanced fabric interconnects and extensive cache architecture provide the necessary infrastructure for high-performance applications, ensuring broad market adoption and versatile deployment options.

Ventana Micro Systems
AI Processor, Audio Processor, CPU, DSP Core, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

Chimera GPNPU

The Quadric Chimera General Purpose Neural Processing Unit (GPNPU) delivers unparalleled performance for AI workloads, characterized by its ability to handle diverse and complex tasks without requiring separate processors for different operations. Designed to unify AI inference and traditional computing processes, the GPNPU supports matrix, vector, and scalar tasks within a single, cohesive execution pipeline. This design not only simplifies the integration of AI capabilities into system-on-chip (SoC) architectures but also significantly boosts developer productivity by allowing them to focus on optimizing rather than partitioning code. The Chimera GPNPU is highly scalable, supporting a wide range of operations across all market segments, including automotive applications with its ASIL-ready versions. With a performance range from 1 to 864 TOPS, it excels in running the latest AI models, such as vision transformers and large language models, alongside classic network backbones. This flexibility ensures that devices powered by Chimera GPNPU can adapt to advancing AI trends, making them suitable for applications that require both immediate performance and long-term capability. A key feature of the Chimera GPNPU is its fully programmable nature, making it a future-proof solution for deploying cutting-edge AI models. Unlike traditional NPUs that rely on hardwired operations, the Chimera GPNPU uses a software-driven approach with its source RTL form, making it a versatile option for inference in mobile, automotive, and edge computing applications. This programmability allows for easy updating and adaptation to new AI model operators, maximizing the lifespan and relevance of chips that utilize this technology.

Quadric
15 Categories
View Details

eSi-3250

Leveraging a high-performance RISC architecture, the eSi-3250 32-bit core efficiently integrates instruction and data caches. This makes it compatible with designs utilizing slower on-chip memories such as eFlash. The core not only supports MMU for address translation but also allows for user-defined custom instructions, greatly enhancing its flexibility for specialized and high-performance applications.

eSi-RISC
All Foundries
16nm, 90nm, 250nm, 350nm
Building Blocks, CPU, DSP Core, Microcontroller, Multiprocessor / DSP, Processor Cores
View Details

xcore.ai

The xcore.ai platform by XMOS is a versatile, high-performance microcontroller designed for the integration of AI, DSP, and real-time I/O processing. Focusing on bringing intelligence to the edge, this platform facilitates the construction of entire DSP systems using software without the need for multiple discrete chips. Its architecture is optimized for low-latency operation, making it suitable for diverse applications from consumer electronics to industrial automation. This platform offers a robust set of features conducive to sophisticated computational tasks, including support for AI workloads and enhanced control logic. The xcore.ai platform streamlines development processes by providing a cohesive environment that blends DSP capabilities with AI processing, enabling developers to realize complex applications with greater efficiency. By doing so, it reduces the complexity typically associated with chip integration in advanced systems. Designed for flexibility, xcore.ai supports a wide array of applications across various markets. Its ability to handle audio, voice, and general-purpose processing makes it an essential building block for smart consumer devices, industrial control systems, and AI-powered solutions. Coupled with comprehensive software support and development tools, the xcore.ai ensures a seamless integration path for developers aiming to push the boundaries of AI-enabled technologies.

XMOS Semiconductor
21 Categories
View Details

Metis AIPU M.2 Accelerator Module

The Metis AIPU M.2 Accelerator Module is designed for devices that require high-performance AI inference in a compact form factor. Powered by a quad-core Metis AI Processing Unit (AIPU), this module optimizes power consumption and integration, making it ideal for AI-driven applications. With a dedicated memory of 1 GB DRAM, it enhances the capabilities of vision processing systems, providing significant boosts in performance for devices with Next Generation Form Factor (NGFF) M.2 sockets. Ideal for use in computer vision systems and more, it offers hassle-free integration and evaluation with Axelera's Voyager SDK. This accelerator module is tailored for any application seeking to harness the power of AI processing efficiently. The Metis AIPU M.2 Module streamlines the deployment of AI applications, ensuring high performance with reduced power consumption.

Axelera AI
2D / 3D, AI Processor, AMBA AHB / APB/ AXI, Building Blocks, CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor, WMV
View Details

Talamo SDK

The Talamo Software Development Kit (SDK) is an advanced solution from Innatera designed to expedite the development of neuromorphic AI applications. It integrates seamlessly with PyTorch, providing developers with a familiar environment to build and extend AI models specifically for spiking neural processors. By enhancing the standard PyTorch workflow, Talamo reduces the complexity of constructing spiking neural networks, allowing a broader range of developers to create sophisticated AI solutions without requiring deep expertise in neuromorphic computing. Talamo's capabilities include automatic mapping of trained models onto Innatera's heterogeneous computing architecture, coupled with a robust architecture simulator for efficient validation and iteration. This means developers can iterate quickly and efficiently, optimizing their applications for performance and power without extensive upfront reconfiguration or capital outlay. The SDK supports the creation of application pipelines that merge signal processing with AI, supporting custom functions and neural network implementations. This gives developers the flexibility to tailor solutions to specific needs, be it in audio processing, gesture recognition, or environmental sensing. Through its comprehensive toolkit, the Talamo SDK empowers users to translate conceptual models into high-performing AI applications that leverage the unique processing strengths of spiking neural networks, ultimately lowering barriers to innovation in low-power, edge-based AI.
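Because Talamo builds on the standard PyTorch workflow, the general shape of a project looks like the sketch below: a plain PyTorch model, with the Talamo-specific mapping and simulation steps shown only as commented-out placeholders (the `talamo` module and function names are hypothetical, not the actual SDK API).

```python
# Plain PyTorch model; the Talamo-specific steps at the end are hypothetical placeholders.
import torch
import torch.nn as nn

class SensorClassifier(nn.Module):
    """Small stand-in network for an audio/gesture/sensor-fusion style task."""
    def __init__(self, n_features: int = 40, n_classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64),
            nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = SensorClassifier()
dummy = torch.randn(1, 40)
print(model(dummy).shape)   # torch.Size([1, 10]) -- ordinary PyTorch up to this point

# Hypothetical Talamo steps (names are placeholders, not the real SDK calls):
# import talamo
# mapped = talamo.map(model)          # map the trained model onto the spiking processor
# talamo.simulate(mapped, dummy)      # validate with the architecture simulator
```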

Innatera Nanosystems
AI Processor, CPU, Multiprocessor / DSP, Processor Core Independent, Vision Processor
View Details

aiWare

The aiWare NPU (Neural Processing Unit) by aiMotive is a high-performance hardware solution tailored specifically for automotive AI applications. It is engineered to accelerate inference tasks for autonomous driving systems, ensuring excellent performance across a variety of neural network workloads. aiWare delivers significant flexibility and efficiency, capable of scaling from basic Level 2 applications to complex multi-sensor Level 3+ systems. Achieving up to 98% efficiency, aiWare's design focuses on minimizing power utilization while maximizing core performance. It supports a broad spectrum of neural network architectures, including convolutional neural networks, transformers, and recurrent networks, making it suitable for diverse AI tasks in the automotive sphere. The NPU's architecture allows for minimal external memory access, thanks to its highly efficient dataflow design that capitalizes on on-chip memory caching. With a robust toolkit known as aiWare Studio, engineers can efficiently optimize neural networks without in-depth knowledge of low-level programming, streamlining development and integration efforts. The aiWare hardware is also compatible with V2X communication and advanced driver assistance systems, adapting to various operational needs with great dexterity. Its comprehensive support for automotive safety standards further cements its reputation as a reliable choice for integrating artificial intelligence into next-generation vehicles.

aiMotive
11 Categories
View Details

SAKURA-II AI Accelerator

The SAKURA-II is a cutting-edge AI accelerator that combines high performance with low power consumption, designed to efficiently handle multi-billion-parameter models for generative AI applications. It is particularly suited for tasks that demand real-time AI inferencing with minimal batch processing, making it ideal for applications deployed in edge environments. With a typical power usage of 8 watts and a compact footprint, the SAKURA-II achieves more than twice the AI compute efficiency of comparable solutions. This AI accelerator supports next-generation applications by providing up to 4x more DRAM bandwidth than alternatives, crucial for processing complex vision tasks and large language models (LLMs). The hardware offers software-enabled mixed precision that achieves near-FP32 accuracy, while a unique sparse computing feature optimizes memory usage. Its memory architecture supports up to 32 GB of DRAM, providing ample capacity for intensive AI workloads. The SAKURA-II's modular design allows it to be used in multiple form factors, addressing the diverse needs of modern computing tasks such as those found in smart cities, autonomous robotics, and smart manufacturing. Its adaptability is further enhanced by runtime-configurable data paths, allowing the device to optimize task scheduling and resource allocation dynamically. These features are powered by the Dynamic Neural Accelerator engine, ensuring efficient computation and energy management.
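As a quick capacity check on running multi-billion-parameter models against the 32 GB DRAM ceiling, the arithmetic below estimates weight storage at a few precisions; the 7-billion-parameter model size is an illustrative assumption, not an EdgeCortix figure.

```python
# Rough weight-storage check for an assumed 7B-parameter model against 32 GB of DRAM.
params = 7e9          # illustrative model size
dram_gb = 32          # DRAM capacity quoted for SAKURA-II configurations

for name, bits in (("FP16", 16), ("INT8", 8), ("INT4", 4)):
    size_gb = params * bits / 8 / 1e9
    verdict = "fits" if size_gb < dram_gb else "does not fit"
    print(f"{name}: ~{size_gb:.1f} GB of weights -> {verdict} "
          f"(before activations and KV cache)")
```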

EdgeCortix Inc.
AI Processor, CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Vision Processor
View Details

A25

The A25 processor model is a versatile CPU suitable for a variety of embedded applications. With its 5-stage pipeline and 32/64-bit architecture, it delivers high performance even with a low gate count, which translates to efficiency in power-sensitive environments. The A25 is equipped with Andes Custom Extensions that enable tailored instruction sets for specific application accelerations. Supporting robust high-frequency operations, this model shines in its ability to manage data prefetching and cache coherence in multicore setups, making it adept at handling complex processing tasks within constrained spaces.

Andes Technology
CPU, IoT Processor, Microcontroller, Processor Core Dependent, Processor Cores, Standard cell
View Details

RV12 RISC-V Processor

The RV12 RISC-V Processor is a highly configurable, single-core CPU that adheres to the RV32I and RV64I standards. It is engineered for the embedded market, offering a robust structure based on the RISC-V instruction set. The processor's architecture allows simultaneous instruction and data memory accesses, lending itself to a broad range of applications while maintaining high operational efficiency. This flexibility makes it an ideal choice for diverse execution requirements, supporting efficient data processing through an optimized CPU framework. Known for its adaptability, the RV12 processor can support multiple configurations to suit various application demands. It is capable of providing the necessary processing power for embedded systems, with a reputation for stability and reliability. This makes the processor well suited to designs that must sustain performance without giving up configurability, meeting the rigorous needs of modern embedded computing. Its support for the open RISC-V architecture ensures it can integrate seamlessly into existing systems. It lends itself well to both industrial and academic applications, offering a resource-efficient platform that developers and researchers can easily access and utilize.

Roa Logic BV
AI Processor, CPU, Cryptography Software Library, IoT Processor, Microcontroller, Processor Cores
View Details

KL630 AI SoC

The KL630 is a pioneering AI chipset featuring Kneron's latest NPU architecture, which is the first to support Int4 precision and transformer networks. This cutting-edge design ensures exceptional compute efficiency with minimal energy consumption, making it ideal for a wide array of applications. With an ARM Cortex A5 CPU at its core, the KL630 excels in computation while maintaining low energy expenditure. This SOC is designed to handle both high and low light conditions optimally and is perfectly suited for use in diverse edge AI devices, from security systems to expansive city and automotive networks.

Kneron
TSMC
12nm LP/LP+
ADPCM, AI Processor, Camera Interface, CPU, GPU, Input/Output Controller, Processor Core Independent, USB, VGA, Vision Processor
View Details

GenAI v1

RaiderChip's GenAI v1 is a pioneering hardware-based generative AI accelerator, designed to perform local inference at the Edge. This technology integrates optimally with on-premises servers and embedded devices, offering substantial benefits in privacy, performance, and energy efficiency over traditional hybrid AI solutions. The design of the GenAI v1 NPU streamlines the process of executing large language models by embedding them directly onto the hardware, eliminating the need for external components like CPUs or internet connections. With its ability to support complex models such as the Llama 3.2 with 4-bit quantization on LPDDR4 memory, the GenAI v1 achieves unprecedented efficiency in AI token processing, coupled with energy savings and reduced latency. What sets GenAI v1 apart is its scalability and cost-effectiveness, significantly outperforming competitive solutions such as Intel Gaudi 2, Nvidia's cloud GPUs, and Google's cloud TPUs in terms of memory efficiency. This solution maximizes the number of tokens generated per unit of memory bandwidth, thus addressing one of the primary limitations in generative AI workflow. Furthermore, the adept memory usage of GenAI v1 reduces the dependency on costly memory types like HBM, opening the door to more affordable alternatives without diminishing processing capabilities. With a target-agnostic approach, RaiderChip ensures the GenAI v1 can be adapted to various FPGAs and ASICs, offering configuration flexibility that allows users to balance performance with hardware costs. Its compatibility with a wide range of transformers-based models, including proprietary modifications, ensures GenAI v1's robust placement across sectors requiring high-speed processing, like finance, medical diagnostics, and autonomous systems. RaiderChip's innovation with GenAI v1 focuses on supporting both vanilla and quantized AI models, ensuring high computation speeds necessary for real-time applications without compromising accuracy. This capability underpins their strategic vision of enabling versatile and sustainable AI solutions across industries. By prioritizing integration ease and operational independence, RaiderChip provides a tangible edge in applying generative AI effectively and widely.
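The emphasis on memory bandwidth above reflects the usual rule of thumb for LLM decoding: each generated token streams roughly the entire quantized weight set once, so tokens per second is bounded by bandwidth divided by model size. The sketch below applies that estimate with illustrative numbers (the model size and LPDDR4 bandwidth are assumptions, not RaiderChip specifications).

```python
# Memory-bound upper bound on decode rate: tokens/s ~= bandwidth / quantized model bytes.
model_params = 3e9              # assumed Llama-3.2-3B-class model
bits_per_weight = 4             # 4-bit quantization, as in the description
model_bytes = model_params * bits_per_weight / 8

bandwidth_gbs = 25.6            # assumed LPDDR4-class memory bandwidth, GB/s

tokens_per_s = bandwidth_gbs * 1e9 / model_bytes
print(f"~{tokens_per_s:.0f} tokens/s upper bound at {bandwidth_gbs} GB/s")
```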

RaiderChip
GLOBALFOUNDRIES, TSMC
28nm, 65nm
AI Processor, AMBA AHB / APB/ AXI, Audio Controller, Coprocessor, CPU, Ethernet, Microcontroller, Multiprocessor / DSP, PowerPC, Processor Core Dependent, Processor Cores
View Details

Speedster7t FPGAs

The Speedster7t FPGA family is crafted for high-bandwidth tasks, tackling the usual restrictions seen in conventional FPGAs. Manufactured using the TSMC 7nm FinFET process, these FPGAs are equipped with a pioneering 2D network-on-chip architecture and a series of machine learning processors for optimal high-bandwidth performance and AI/ML workloads. They integrate interfaces for high-paced GDDR6 memory, 400G Ethernet, and PCI Express Gen5 ports. This 2D network-on-chip connects various interfaces to upward of 80 access points in the FPGA fabric, enabling ASIC-like performance, yet retaining complete programmability. The product encourages users to start with the VectorPath accelerator card which houses the Speedster7t FPGA. This family offers robust tools for applications such as 5G infrastructure, computational storage, and test and measurement.

Achronix
TSMC
7nm
Processor Cores
View Details

GenAI v1-Q

The GenAI v1-Q from RaiderChip brings a specialized focus on quantized AI operations, reducing memory requirements significantly while maintaining impressive precision and speed. This innovative accelerator is engineered to execute large language models in real time, utilizing advanced quantization techniques such as Q4_K and Q5_K, thereby enhancing AI inference efficiency especially in memory-constrained environments. By offering a 276% boost in processing speed alongside a 75% reduction in memory footprint, GenAI v1-Q empowers developers to integrate advanced AI capabilities into smaller, less powerful devices without sacrificing operational quality. This makes it particularly advantageous for applications demanding swift response times and low latency, including real-time translation, autonomous navigation, and responsive customer interactions. The GenAI v1-Q diverges from conventional AI solutions by functioning independently, without external network or cloud dependencies. Its design combines superior computational performance with scalability, allowing seamless adaptation across varied hardware platforms, including FPGA and ASIC implementations. This flexibility is crucial for tailoring performance parameters like model scale, inference speed, and power consumption to meet exacting user specifications. RaiderChip's GenAI v1-Q addresses crucial AI industry needs with its ability to manage multiple transformer-based models and confidential data securely on-premises. This opens doors for its application in sensitive areas such as defense, healthcare, and financial services, where confidentiality and rapid processing are paramount. With GenAI v1-Q, RaiderChip underscores its commitment to advancing AI solutions that are both environmentally sustainable and economically viable.
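The quoted 75% memory-footprint reduction is what one expects from moving 16-bit weights to roughly 4-bit quantized formats such as Q4_K; the short calculation below spells that out (the model size is an illustrative assumption).

```python
# Footprint reduction from ~4-bit quantization relative to a 16-bit baseline.
params = 7e9                              # illustrative model size
fp16_gb = params * 16 / 8 / 1e9           # 16-bit weights
q4_gb = params * 4 / 8 / 1e9              # ~4 bits/weight, ignoring small block overheads

reduction = 1 - q4_gb / fp16_gb
print(f"FP16: {fp16_gb:.1f} GB, Q4: {q4_gb:.1f} GB -> {reduction:.0%} smaller")
```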

RaiderChip
TSMC
65nm
AI Processor, AMBA AHB / APB/ AXI, Audio Controller, Coprocessor, CPU, Ethernet, Microcontroller, Multiprocessor / DSP, PowerPC, Processor Core Dependent, Processor Cores
View Details

Ultra-Low-Power 64-Bit RISC-V Core

Micro Magic's Ultra-Low-Power 64-Bit RISC-V Core is engineered for superior energy efficiency, consuming a mere 10mW while operating at a clock speed of 1GHz. This processor core is designed to excel under low voltage conditions, delivering high performance without compromising on power conservation. It is ideal for applications requiring prolonged battery life or those operating in energy-sensitive environments. This processor integrates Micro Magic's advanced design methodologies, allowing for operation at frequencies up to 5GHz when necessary. The RISC-V Core capabilities are enhanced by solid construction, ensuring reliability and robust performance across various use-cases, making it an adaptable solution for modern electronic designs. With its cutting-edge design, this RISC-V core supports rapid deployment in numerous applications, especially in areas demanding high computational power alongside reduced energy usage. Micro Magic's advanced techniques ensure that this core is not only fast but also supports scalable integration into various systems with ease.
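The 10 mW at 1 GHz headline corresponds to roughly 10 pJ per clock cycle; the sketch below also turns it into a rough battery-life figure under an assumed coin-cell capacity (the battery numbers are illustrative, not Micro Magic's).

```python
# Energy per cycle and a rough battery estimate from the quoted 10 mW @ 1 GHz figure.
power_w = 10e-3      # 10 mW core power, as quoted
freq_hz = 1e9        # 1 GHz clock, as quoted

energy_per_cycle_pj = power_w / freq_hz * 1e12
print(f"~{energy_per_cycle_pj:.0f} pJ per cycle")              # ~10 pJ/cycle

battery_wh = 0.225 * 3.0   # assumed 225 mAh, 3 V coin cell (illustrative)
print(f"~{battery_wh / power_w:.0f} hours of continuous 1 GHz operation, core power only")
```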

Micro Magic, Inc.
AI Processor, CPU, IoT Processor, Multiprocessor / DSP, Processor Core Independent, Processor Cores
View Details

Hanguang 800 AI Accelerator

The Hanguang 800 AI Accelerator is a high-performance AI processor developed to meet the complex demands of artificial intelligence workloads. This accelerator is engineered with cutting-edge AI processing capabilities, enabling rapid data analysis and machine learning model inference. Designed for flexibility, the Hanguang 800 delivers superior computation speed and energy efficiency, making it an optimal choice for AI applications in a variety of sectors, from data centers to edge computing. By supporting high-volume data throughput, it enables organizations to achieve significant advantages in speed and efficiency, facilitating the deployment of intelligent solutions.

T-Head Semiconductor
AI Processor, CPU, IoT Processor, Processor Core Dependent, Security Processor, Vision Processor
View Details

AX45MP

The AX45MP is engineered as a high-performance processor that supports multicore architecture and advanced data processing capabilities, particularly suitable for applications requiring extensive computational efficiency. Powered by the AndesCore processor line, it capitalizes on a multicore symmetric multiprocessing framework, integrating up to eight cores with robust L2 cache management. The AX45MP incorporates advanced features such as vector processing capabilities and support for MemBoost technology to maximize data throughput. It caters to high-demand applications including machine learning, digital signal processing, and complex algorithmic computations, ensuring data coherence and efficient power usage.

Andes Technology
2D / 3D, ADPCM, CPU, IoT Processor, Processor Core Independent, Processor Cores, Vision Processor
View Details

RISC-V Core-hub Generators

The RISC-V Core-hub Generators from InCore are tailored for developers who need advanced control over their core architectures. This innovative tool enables users to configure core-hubs precisely at the instruction set and microarchitecture levels, allowing for optimized design and functionality. The platform supports diverse industry applications by facilitating the seamless creation of scalable and customizable RISC-V cores. With the RISC-V Core-hub Generators, InCore empowers users to craft their own processor solutions from the ground up. This flexibility is pivotal for businesses looking to capitalize on the burgeoning RISC-V ecosystem, providing a pathway to innovation with reduced risk and cost. Incorporating feedback from leading industry partners, these generators are designed to lower verification costs while accelerating time-to-market for new designs. Users benefit from InCore's robust support infrastructure and a commitment to simplifying complex chip design processes. This product is particularly beneficial for organizations aiming to integrate RISC-V technology efficiently into their existing systems, ensuring compatibility and enhancing functionality through intelligent automation and state-of-the-art tools.

InCore Semiconductors
AI Processor, CPU, IoT Processor, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

KL520 AI SoC

The KL520 marks Kneron's foray into the edge AI landscape, offering an impressive combination of size, power efficiency, and performance. Armed with dual ARM Cortex M4 processors, this chip can operate independently or as a co-processor to enable AI functionalities such as smart locks and security monitoring. The KL520 is adept at 3D sensor integration, making it an excellent choice for applications in smart home ecosystems. Its compact design allows devices powered by it to operate on minimal power, such as running on AA batteries for extended periods, showcasing its exceptional power management capabilities.

Kneron
TSMC
65nm
AI Processor, Camera Interface, Clock Generator, CPU, GPU, IoT Processor, MPEG 4, Processor Core Independent, Receiver/Transmitter, Vision Processor
View Details

KL530 AI SoC

The KL530 represents a significant advancement in AI chip technology with a new NPU architecture optimized for both INT4 precision and transformer networks. This SOC is engineered to provide high processing efficiency and low power consumption, making it suitable for AIoT applications and other innovative scenarios. It features an ARM Cortex M4 CPU designed for low-power operation and offers a robust computational power of up to 1 TOPS. The chip's ISP enhances image quality, while its codec ensures efficient multimedia compression. Notably, the chip's cold start time is under 500 ms with an average power draw of less than 500 mW, establishing it as a leader in energy efficiency.
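Read together, the 1 TOPS compute figure and the sub-500 mW average draw imply an efficiency of at least 2 TOPS/W; the one-liner below makes the arithmetic explicit.

```python
# Efficiency implied by the KL530's quoted figures.
tops = 1.0          # quoted computational power
power_w = 0.5       # quoted upper bound on average power draw

print(f">= {tops / power_w:.0f} TOPS/W")   # at least 2 TOPS/W
```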

Kneron
TSMC
28nm SLP
AI Processor, Camera Interface, Clock Generator, CPU, CSC, GPU, IoT Processor, Peripheral Controller, Vision Processor
View Details

AndesCore Processors

AndesCore Processors offer a robust lineup of high-performance CPUs tailored for diverse market segments. Employing the AndeStar V5 instruction set architecture, these cores uniformly support the RISC-V technology. The processor family is classified into different series, including the Compact, 25-Series, 27-Series, 40-Series, and 60-Series, each featuring unique architectural advances. For instance, the Compact Series specializes in delivering compact, power-efficient processing, while the 60-Series is optimized for high-performance out-of-order execution. Additionally, AndesCore processors extend customization through Andes Custom Extension, which allows users to define specific instructions to accelerate application-specific tasks, offering a significant edge in design flexibility and processing efficiency.

Andes Technology
CPU, FlexRay, Processor Core Dependent, Processor Core Independent, Processor Cores, Security Processor
View Details

Tianqiao-70 Low-Power Commercial Grade 64-bit RISC-V CPU

Crafted to deliver significant power savings, the Tianqiao-70 is a low-power RISC-V CPU that excels in commercial-grade scenarios. This 64-bit CPU core is primarily designed for applications where power efficiency is critical, such as mobile devices and computationally intensive IoT solutions. The core's architecture is specifically optimized to perform under stringent power budgets without compromising on the processing power needed for complex tasks. It provides an efficient solution for scenarios that demand reliable performance while maintaining a low energy footprint. Through its refined design, the Tianqiao-70 supports a broad spectrum of applications, including personal computing, machine learning, and mobile communications. Its versatility and power-awareness make it a preferred choice for developers focused on sustainable and scalable computing architectures.

StarFive Technology
AI Processor, CPU, Microcontroller, Multiprocessor / DSP, Processor Cores, Vision Processor
View Details

3D Imaging Chip

Altek's 3D Imaging Chip is a breakthrough in the field of vision technology. Designed with an emphasis on depth perception, it enhances the accuracy of 3D scene capturing, making it ideal for applications requiring precise distance gauging such as autonomous vehicles and drones. The chip integrates seamlessly within complex systems, boasting superior recognition accuracy that ensures reliable and robust performance. Building upon years of expertise in 3D imaging, this chip supports multiple 3D modes, offering flexible solutions for devices from surveillance robots to delivery mechanisms. It facilitates medium-to-long-range detection needs thanks to its refined depth sensing capabilities. Altek's approach ensures a comprehensive package from modular design to chip production, creating a cohesive system that marries both hardware and software effectively. Deployed within various market segments, it delivers adaptable image solutions with dynamic design agility. Its imaging prowess is further enhanced by state-of-the-art algorithms that refine image quality and facilitate facial detection and recognition, thereby expanding its utility across diverse domains.

Altek Corporation
TSMC
16nm FFC/FF+
A/D Converter, Analog Front Ends, Coprocessor, Graphics & Video Modules, Image Conversion, JPEG, Oversampling Modulator, Photonics, PLL, Sensor, Vision Processor
View Details

WiseEye2 AI Solution

Himax's WiseEye2 AI solution is a pioneering technology aimed at ultra-low power sensor fusion for AI on-device applications. This innovative solution integrates artificial intelligence capabilities within consumer electronics, offering smart solutions for homes, cities, and various industrial applications. The WiseEye2 technology excels in enabling devices to perform complex AI tasks onsite without relying heavily on remote data centers. This feature not only minimizes latency but also enhances privacy aspects by processing data locally. It supports a range of applications from smart home appliances and intelligent security systems to cutting-edge consumer electronics. Designed with efficiency in mind, the WiseEye2 AI solution is built to operate under minimal power conditions, extending the battery life of devices it powers. This makes it ideal for portable and remote applications where energy conservation is critical.

Himax Technologies, Inc.
AI Processor, Vision Processor
View Details

NMP-750

The NMP-750 is AiM Future's powerful edge computing accelerator designed specifically for high-performance tasks. With up to 16 TOPS of computational throughput, this accelerator is perfect for automotive, AMRs, UAVs, as well as AR/VR applications. Fitted with up to 16 MB of local memory and featuring RISC-V or Arm Cortex-R/A 32-bit CPUs, it supports diverse data processing requirements crucial for modern technological solutions. The versatility of the NMP-750 is displayed in its ability to manage complex processes such as multi-camera stream processing and spectral efficiency management. It is also an apt choice for applications that require energy management and building automation, demonstrating exceptional potential in smart city and industrial setups. With its robust architecture, the NMP-750 ensures seamless integration into systems that need to handle large data volumes and support high-speed data transmission. This makes it ideal for applications in telecommunications and security where infrastructure resilience is paramount.

AiM Future
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent
View Details

Azurite Core-hub

The Azurite Core-hub by InCore Semiconductors is a sophisticated solution designed to offer scalable RISC-V SoCs with high-speed secure interconnect capabilities. This processor is tailored for performance-demanding applications, ensuring that systems maintain robust security while executing tasks at high speeds. Azurite leverages advanced interconnect technologies to enhance the communication between components within a SoC, making it ideal for industries that require rapid data transfer and high processing capabilities. The core is engineered to be scalable, supporting a wide range of applications from edge AI to functional safety systems, adapting seamlessly to various industry needs. Engineered with a focus on security, the Azurite Core-hub incorporates features that protect data integrity and system operation in a dynamic technological landscape. This makes it a reliable choice for companies seeking to integrate advanced RISC-V architectures into their security-focused applications, offering not just innovation but also peace of mind with its secure design.

InCore Semiconductors
AI Processor, CPU, Microcontroller, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

EW6181 GPS and GNSS Silicon

The EW6181 is an advanced multi-GNSS silicon solution designed for high sensitivity and precision. This powerful chip supports GPS, Glonass, BeiDou, Galileo, SBAS, and A-GNSS, offering integration flexibility with various applications. Its built-in RF frontend and digital baseband facilitate robust signal processing, controlled by an ARM MCU. The EW6181 integrates essential interfaces for diverse connectivity, matched with DC-DC converters and LDOs to minimize BOM in battery-driven setups. This silicon marries low power demands with strong functional capabilities, thanks to proprietary algorithms that optimize its operation. It’s engineered to deliver exceptional accuracy and sensitivity in both standalone and cloud-related environments, adapting smoothly to connected ecosystems for enhanced efficiency. Its compact silicon footprint further enhances its suitability for applications needing prolonged battery life and reliable positioning. With a focus on Antenna Diversity, the EW6181 shines in dynamic applications like action cameras and smartwatches, ensuring clear signal reception even when devices rapidly rotate. This aspect accentuates the chip's ability to maintain consistent performance across a range of challenging environments, reinforcing its role in the forefront of GNSS technology.

EtherWhere Corporation
All Foundries
7nm
3GPP-5G, AI Processor, ATM / Utopia, Bluetooth, CAN, CAN XL, CAN-FD, Fibre Channel, FlexRay, GPS, JESD 204A / JESD 204B, OBSAI, Optical/Telecom, Photonics, RF Modules, USB, W-CDMA
View Details

ReRAM Memory

CrossBar's ReRAM Memory brings a revolutionary shift in the non-volatile memory sector, designed with a straightforward yet efficient three-layer structure. Comprising a top electrode, a switching medium, and a bottom electrode, ReRAM holds vast potential as a multiple-time programmable memory solution. Leveraging the resistive switching mechanism, the technology excels in meter-scale data storage applications, integrating seamlessly into AI-driven, IoT, and secure computing realities. The patented ReRAM technology is distinguished by its ability to perform at peak efficiency with notable read and write speeds, making it a suitable candidate for future-facing chip architectures that require swift, wide-ranging memory capabilities. Unprecedented in its energy-saving capabilities, CrossBar's ReRAM slashes energy consumption by up to 5 times compared to eFlash and offers substantial improvements over NAND and SPI Flash memories. Coupled with exceptional read latencies of around 20 nanoseconds and write times of approximately 12 microseconds, the memory technology outperforms existing solutions, enhancing system responsiveness and user experiences. Its high-density memory configurations provide terabyte-scale storage with minimal physical footprint, ensuring effective integration into cutting-edge devices and systems. Moreover, ReRAM's design permits its use within traditional CMOS manufacturing processes, enabling scalable, stackable arrays. This adaptability ensures that suppliers can integrate these memory solutions at various stages of semiconductor production, from standalone memory chips to embedded roles within complex system-on-chip designs. The inherent simplicity, combined with remarkable performance characteristics, positions ReRAM Memory as a key player in the advancement of secure, high-density computing.

CrossBar Inc.
CPU, Embedded Memories, Embedded Security Modules, Flash Controller, I/O Library, Mobile SDR Controller, NAND Flash, SDRAM Controller, Security Processor, SRAM Controller, Standard cell
View Details

AHB-Lite Timer

The AHB-Lite Timer module designed by Roa Logic is compliant with the RISC-V Privileged 1.9.1 specification, offering a versatile timing solution for embedded applications. As an integral peripheral, it provides precise timing functionalities, enabling applications to perform scheduled operations accurately. Its parameterized design allows developers to adjust the timer's features to match the needs of their system effectively. This timer module supports a broad scope of timing tasks, ranging from simple delay setups to complex timing sequences, making it ideal for various embedded system requirements. The flexibility in its design ensures straightforward implementation, reducing complexity and enhancing the overall performance of the target application. With RISC-V compliance at its core, the AHB-Lite Timer ensures synchronization and precision in signal delivery, crucial for systems tasked with critical timing operations. Its adaptable architecture and dependable functionality make it an exemplary choice for projects where timing accuracy is required.
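In the RISC-V privileged-specification timer model the description refers to, a delay is programmed by adding a tick count to the free-running counter and writing the result to the compare register; the sketch below computes those tick counts under an assumed timer clock (the 32.768 kHz rate is an example, not a property of this IP).

```python
# Converting wall-clock delays into ticks for a RISC-V-style mtime/mtimecmp timer.
import math

TIMER_CLK_HZ = 32_768   # assumed example timer clock (e.g. an RTC-style source)

def ticks_for(delay_s: float) -> int:
    """Timer ticks for a delay, rounded up so the delay is never undershot."""
    return math.ceil(delay_s * TIMER_CLK_HZ)

for delay_s in (0.001, 0.1, 1.0):
    print(f"{delay_s * 1000:6.1f} ms -> {ticks_for(delay_s):6d} ticks")

# Firmware would then write (current counter value + ticks) to the compare register
# to raise a timer interrupt after the requested delay.
```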

Roa Logic BV
AMBA AHB / APB/ AXI, CPU, Cryptography Software Library, Input/Output Controller, Timer/Watchdog
View Details

RISC-V CPU IP N Class

The N Class RISC-V CPU IP from Nuclei is tailored for applications where space efficiency and power conservation are paramount. It features a 32-bit architecture and is highly suited for microcontroller applications within the AIoT realm. The N Class processors are crafted to provide robust processing capabilities while maintaining a minimal footprint, making them ideal candidates for devices that require efficient power management and secure operations. By adhering to the open RISC-V standard, Nuclei ensures that these processors can be seamlessly integrated into various solutions, offering customizable options to fit specific system requirements.

Nuclei System Technology
Building Blocks, CPU, IoT Processor, Microcontroller, Processor Core Dependent, Processor Cores
View Details

RapidGPT - AI-Driven EDA Tool

RapidGPT is a next-generation electronic design automation tool powered by AI. Designed for those in the hardware engineering field, it allows for a seamless transition from ideas to physical hardware without the usual complexities of traditional design tools. The interface is highly intuitive, engaging users with natural language interaction to enhance productivity and reduce the time required for design iterations.

Enhancing the entire design process, RapidGPT begins with concept development and guides users through to the final stages of bitstream or GDSII generation. This tool effectively acts as a co-pilot for engineers, allowing them to easily incorporate third-party IPs, making it adaptable for various project requirements. This adaptability is paramount for industries where speed and precision are of the essence.

PrimisAI has integrated novel features such as AutoReview™, which provides automated HDL audits; AutoComment™, which generates AI-driven comments for HDL files; and AutoDoc™, which helps create comprehensive project documentation effortlessly. These features collectively make RapidGPT not only a design tool but also a comprehensive project management suite.

The effectiveness of RapidGPT is made evident in its robust support for various design complexities, providing a scalable solution that meets specific user demands from individual developers to large engineering teams seeking enterprise-grade capabilities.

PrimisAI
AMBA AHB / APB/ AXI, CPU, Ethernet, HDLC, Processor Core Independent
View Details

RAIV General Purpose GPU

RAIV represents Siliconarts' General Purpose-GPU (GPGPU) offering, engineered to accelerate data processing across diverse industries. This versatile GPU IP is essential in sectors engaged in high-performance computing tasks, such as autonomous driving, IoT, and sophisticated data centers. With RAIV, Siliconarts taps into the potential of the fourth industrial revolution, enabling rapid computation and seamless data management. The RAIV architecture is poised to deliver unmatched efficiency in high-demand scenarios, supporting massive parallel processing and intricate calculations. It provides an adaptable framework that caters to the needs of modern computing, ensuring balanced workloads and optimized performance. Whether used for VR/AR applications or supporting the back-end infrastructure of data-intensive operations, RAIV is designed to meet and exceed industry expectations. RAIV’s flexible design can be tailored to enhance a broad spectrum of applications, promising accelerated innovation in sectors dependent on AI and machine learning. This GPGPU IP not only underscores Siliconarts' commitment to technological advancement but also highlights its capability to craft solutions that drive forward computational boundaries.

Siliconarts, Inc.
AI Processor, Building Blocks, CPU, GPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor, Wireless Processor
View Details

SCR3 Microcontroller Core

Syntacore's SCR3 microcontroller core is a versatile option for developers looking to harness the power of a 5-stage in-order pipeline. Designed to support both 32-bit and 64-bit symmetric multiprocessing (SMP) configurations, this core is perfectly aligned with the needs of embedded applications requiring moderate power and resource efficiency coupled with enhanced processing capabilities. The architecture is fine-tuned to handle a variety of workloads, ensuring a balance between performance and power usage, making it suitable for sectors such as industrial automation, automotive sensors, and IoT devices. The inclusion of privilege modes, memory protection units (MPUs), and cache systems further enhances its capabilities, particularly in environments where system security and reliability are paramount. Developers will find the SCR3 core to be highly adaptable, fitting seamlessly into designs that need scalability and modularity. Syntacore's comprehensive toolkit, combined with detailed documentation, ensures that system integration is both quick and reliable, providing a robust foundation for varied applications.

Syntacore
Building Blocks, CPU, DSP Core, Microcontroller, Processor Cores
View Details

Ceva-SensPro2 - Vision AI DSP

The **Ceva-SensPro DSP family** unites scalar processing units and vector processing units under an 8-way VLIW architecture. The family incorporates advanced control features such as a branch target buffer and a loop buffer to speed up execution and reduce power. There are six family members, each with a different array of MACs, targeted at different application areas and performance points. These range from the Ceva-SP100, providing 128 8-bit integer or 32 16-bit integer MACs at 0.2 TOPS for compact applications such as vision processing in wearables and mobile devices, to the Ceva-SP1000, with 1024 8-bit or 256 16-bit MACs reaching 2 TOPS for demanding applications such as automotive, robotics, and surveillance.

Two of the family members, the Ceva-SPF2 and Ceva-SPF4, employ 32 or 64 32-bit floating-point MACs, respectively, for applications in electric-vehicle power-train control and battery management. These two members are supported by libraries for Eigen Linear Algebra, MATLAB vector operations, and the TVM graph compiler. Highly configurable, the vector processing units in all family members can add domain-specific instructions for such areas as vision processing, radar, or simultaneous localization and mapping (SLAM) for robotics. Integer family members can also add optional floating-point capabilities.

All family members have independent instruction and data memory subsystems and a Ceva-Connect queue manager for AXI-attached accelerators or coprocessors. The Ceva-SensPro2 family is programmable in C/C++ as well as in Halide and OpenMP, and is supported by an Eclipse-based development environment, extensive libraries spanning a wide range of applications, and the Ceva-NeuPro Studio AI development environment. [**Learn more about Ceva-SensPro2 solution>**](https://www.ceva-ip.com/product/ceva-senspro2/?utm_source=silicon_hub&utm_medium=ip_listing&utm_campaign=ceva_senspro2_page)
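
As a quick sanity check on the figures quoted above, the peak-TOPS numbers can be tied back to the MAC counts. The sketch below is illustrative only: the MAC counts and TOPS values come from the listing, while the "two operations per MAC" (multiply plus accumulate) convention is an assumption, not a Ceva specification.

```python
# Relate the quoted MAC counts to the quoted peak TOPS.
# MAC counts and TOPS figures are taken from the listing above; the
# "2 ops per MAC" (multiply + accumulate) convention is an assumption.

def implied_clock_ghz(peak_tops: float, num_macs: int, ops_per_mac: int = 2) -> float:
    """Clock (GHz) implied by peak_tops = num_macs * ops_per_mac * f_GHz * 1e9 / 1e12."""
    return peak_tops * 1e12 / (num_macs * ops_per_mac * 1e9)

for name, macs, tops in [("Ceva-SP100", 128, 0.2), ("Ceva-SP1000", 1024, 2.0)]:
    print(f"{name}: {macs} 8-bit MACs at {tops} TOPS -> ~{implied_clock_ghz(tops, macs):.2f} GHz")
# Ceva-SP100: 128 8-bit MACs at 0.2 TOPS -> ~0.78 GHz
# Ceva-SP1000: 1024 8-bit MACs at 2.0 TOPS -> ~0.98 GHz
```

Both figures land in the same sub-GHz-to-GHz range, consistent with the family scaling throughput primarily by widening the MAC array rather than by raising the clock.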

Ceva, Inc.
DSP Core, Multiprocessor / DSP, Processor Cores, Vision Processor
View Details

NPU

The Neural Processing Unit (NPU) offered by OPENEDGES is engineered to accelerate machine learning tasks and AI computations. Designed for integration into advanced processing platforms, this NPU enhances the ability of devices to perform complex neural network computations quickly and efficiently, significantly advancing AI capabilities. This NPU is built to handle both deep learning and inferencing workloads, utilizing highly efficient data management processes. It optimizes the execution of neural network models with acceleration capabilities that reduce power consumption and latency, making it an excellent choice for real-time AI applications. The architecture is flexible and scalable, allowing it to be tailored for specific application needs or hardware constraints. With support for various AI frameworks and models, the OPENEDGES NPU ensures compatibility and smooth integration with existing AI solutions. This allows companies to leverage cutting-edge AI performance without the need for drastic changes to legacy systems, making it a forward-compatible and cost-effective solution for modern AI applications.

OPENEDGES Technology, Inc.
AI Processor, Microcontroller, Multiprocessor / DSP, Processor Core Independent
View Details

aiSim 5

aiSim 5 is a state-of-the-art automotive simulation platform designed for ADAS and autonomous driving testing. Recognized as the world's first ISO 26262 ASIL-D certified simulator, it offers unparalleled accuracy and determinism in simulating various driving scenarios and environmental conditions. The simulator integrates AI-based digital twin technology and an advanced rendering engine to create realistic traffic scenarios, helping engineers verify and validate driver assistance systems. Harnessing powerful physics-based simulation capabilities, aiSim 5 replicates real-world phenomena like weather effects and complex traffic dynamics with precision. By offering a comprehensive set of 3D assets and scenarios, it allows for the extensive testing of systems in both typical and edge conditions. With its flexible and open architecture, aiSim 5 can seamlessly integrate into existing testing toolchains, supporting significant variations in sensor configurations and driving algorithms. The platform encourages innovation in simulation methodologies by providing tools for scenario randomization and synthetic data generation, crucial for developing resilient ADAS applications. Additionally, its cloud-ready architecture makes it applicable across various hardware platforms, turning simulation into a versatile resource available on inexpensive and high-end computing configurations alike.

aiMotive
24 Categories
View Details

Codasip RISC-V BK Core Series

The Codasip RISC-V BK Core Series is designed to offer flexible and high-performance core options catering to a wide range of applications, from low-power tasks to intricate computational needs. This series achieves an optimal balance of power consumption and processing speed, making it suitable for applications demanding energy efficiency without compromising performance. These cores are fully RISC-V compliant, allowing for easy customization to suit specific needs by modifying the processor's architecture or instruction set through Codasip Studio. The BK Core Series provides a streamlined process for developing precise computing solutions, ideal for IoT edge devices and sensor controllers where both small area and low power are critical. Moreover, the BK Core Series supports architectural exploration, enabling users to tailor and optimize the core design specifically for their applications. This capability ensures that each core delivers the power, efficiency, and performance metrics required by modern technological solutions.

Codasip
AI Processor, Building Blocks, CPU, DSP Core, IoT Processor, Microcontroller, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

General Purpose Accelerator (Aptos)

The General Purpose Accelerator (Aptos) from Ascenium stands out as a redefining force in the realm of CPU technology. It seeks to overcome the limitations of traditional CPUs by providing a solution that tackles both performance inefficiencies and high energy demands. Leveraging compiler-driven architecture, this accelerator introduces a novel approach by simplifying CPU operations, making it exceptionally suited for handling generic code. Notably, it offers compatibility with the LLVM compiler, ensuring a wide range of applications can be adapted seamlessly without rewrites. The Aptos excels in performance by embracing a highly parallel yet simplified CPU framework that significantly boosts efficiency, reportedly achieving up to four times the performance of cutting-edge CPUs. Such advancements cater not only to performance-oriented tasks but also substantially mitigate energy consumption, providing a dual benefit of cost efficiency and reduced environmental impact. This makes Aptos a valuable asset for data centers seeking to optimize their energy footprint while enhancing computational capabilities. Additionally, the Aptos architecture supports efficient code execution by resolving tasks predominantly at compile-time, allowing the processor to handle workloads more effectively. This allows standard high-level language software to run with improved efficiency across diverse computing environments, aligning with an overarching goal of greener computing. By maximizing operational efficiency and reducing carbon emissions, Aptos propels Ascenium into a leading position in the sustainable and high-performance computing sector.

Ascenium
TSMC
10nm, 12nm
CPU, Processor Core Dependent, Processor Core Independent, Processor Cores, Standard cell
View Details

KL720 AI SoC

The KL720 AI SoC is designed for optimal performance-to-power ratios, achieving 0.9 TOPS per watt. This makes it one of the most efficient chips available for edge AI applications. The SoC is crafted to meet high processing demands and is suitable for high-end devices including smart TVs, AI glasses, and advanced cameras. Built around an Arm Cortex-M4 CPU, it enables superior 4K imaging, full HD video processing, and advanced 3D sensing capabilities. The KL720 also supports natural language processing (NLP), making it ideal for emerging AI interfaces such as AI assistants and gaming gesture controls.
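
To put the quoted 0.9 TOPS per watt in context, the sketch below converts that efficiency figure into an implied power budget. The efficiency number comes from the listing; the sustained-throughput values are hypothetical examples, not KL720 specifications.

```python
# Translate the quoted 0.9 TOPS/W into an implied power budget.
# The efficiency figure is from the listing; the workload throughputs below
# are hypothetical examples used only for illustration.

EFFICIENCY_TOPS_PER_WATT = 0.9  # quoted in the listing

def implied_power_w(sustained_tops: float) -> float:
    """Power (W) required to sustain a given throughput at the quoted efficiency."""
    return sustained_tops / EFFICIENCY_TOPS_PER_WATT

for tops in (0.3, 0.9, 1.4):  # hypothetical sustained workloads
    print(f"{tops:.1f} TOPS sustained -> ~{implied_power_w(tops):.1f} W")
# 0.3 TOPS sustained -> ~0.3 W
# 0.9 TOPS sustained -> ~1.0 W
# 1.4 TOPS sustained -> ~1.6 W
```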

Kneron
TSMC
16nm FFC/FF+
2D / 3D, AI Processor, Audio Interfaces, AV1, Camera Interface, CPU, GPU, Image Conversion, TICO, Vision Processor
View Details

eSi-1650

The eSi-1650 is a compact, low-power 16-bit CPU core with an integrated instruction cache, making it an ideal choice for mature process nodes that rely on OTP or Flash program memory. By omitting large on-chip RAMs, the IP core optimizes power and area, and the instruction cache allows the CPU to run at its maximum operating frequency rather than being limited by OTP/Flash access speed.
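
The benefit of pairing an instruction cache with slow OTP/Flash can be illustrated with a simple average-fetch-latency model. This is a generic sketch under assumed hit rates and access latencies, not eSi-1650 data:

```python
# Generic illustration of why an I-cache decouples the CPU clock from OTP/Flash
# access speed. All hit rates and cycle counts are hypothetical, not eSi-1650 figures.

def avg_fetch_cycles(hit_rate: float, hit_cycles: float, miss_penalty_cycles: float) -> float:
    """Average cycles per instruction fetch for a one-level I-cache model."""
    return hit_rate * hit_cycles + (1.0 - hit_rate) * miss_penalty_cycles

# Suppose a Flash fetch costs 5 CPU cycles at the target clock, while a cache
# hit costs 1 cycle.
direct_flash = avg_fetch_cycles(hit_rate=0.0, hit_cycles=1, miss_penalty_cycles=5)
with_icache = avg_fetch_cycles(hit_rate=0.95, hit_cycles=1, miss_penalty_cycles=5)
print(f"direct from Flash: {direct_flash:.2f} cycles/fetch")  # 5.00
print(f"with I-cache:      {with_icache:.2f} cycles/fetch")   # 1.20
```

With a high hit rate, most fetches complete at cache speed, so the core can be clocked well above what the OTP/Flash alone could sustain.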

eSi-RISC
All Foundries
16nm, 90nm, 250nm, 350nm
Building Blocks, CPU, Microcontroller, Processor Cores
View Details