

Vision Processor Semiconductor IPs

Vision processors are a specialized subset of semiconductor IPs designed to efficiently handle and process visual data. These processors are pivotal in applications that require intensive image analysis and computer vision capabilities, such as artificial intelligence, augmented reality, virtual reality, and autonomous systems. The primary purpose of vision processor IPs is to accelerate the performance of vision processing tasks while minimizing power consumption and maximizing throughput.

In the world of semiconductor IP, vision processors stand out due to their ability to integrate advanced functionalities such as object recognition, image stabilization, and real-time analytics. These processors often leverage parallel processing, machine learning algorithms, and specialized hardware accelerators to perform complex visual computations efficiently. As a result, products ranging from high-end smartphones to advanced driver-assistance systems (ADAS) and industrial robots benefit from improved visual understanding and processing capabilities.

The semiconductor IPs for vision processors can be found in a wide array of products. In consumer electronics, they enhance the capabilities of cameras, enabling features like face and gesture recognition. In the automotive industry, vision processors are crucial for delivering real-time data processing needed for safety systems and autonomous navigation. Additionally, in sectors such as healthcare and manufacturing, vision processor IPs facilitate advanced imaging and diagnostic tools, improving both precision and efficiency.

As technology advances, the demand for vision processor IPs continues to grow. Developers and designers seek IPs that offer scalable architectures and can be customized to meet specific application requirements. By providing enhanced performance and reducing development time, vision processor semiconductor IPs are integral to pushing the boundaries of what's possible with visual data processing and expanding the capabilities of next-generation products.


Akida Neural Processor IP

Akida's Neural Processor IP represents a leap in AI architecture design, tailored to provide exceptional energy efficiency and processing speed for an array of edge computing tasks. At its core, the processor mimics the synaptic activity of the human brain, efficiently executing tasks that demand high-speed computation and minimal power usage. The processor is equipped with configurable neural nodes that support convolutional and fully connected neural network layers. Each node provides a configurable number of MAC operations, allowing the design to scale from basic to complex deployment requirements. This scalability enables the development of lightweight AI solutions suited for consumer electronics as well as robust systems for industrial use. Onboard features like event-based processing and low-latency data communication significantly decrease the strain on host processors, enabling faster and more autonomous system responses. Akida's versatile functionality and ability to learn on the fly make it a cornerstone for next-generation technology solutions that aim to blend cognitive computing with practical, real-world applications.

BrainChip
AI Processor, Coprocessor, CPU, Digital Video Broadcast, Network on Chip, Platform Security, Processor Core Independent, Vision Processor
View Details

KL730 AI SoC

The KL730 is a third-generation AI chip that integrates advanced reconfigurable NPU architecture, delivering up to 8 TOPS of computing power. This cutting-edge technology enhances computational efficiency across a range of applications, including CNN and transformer networks, while minimizing DDR bandwidth requirements. The KL730 also boasts enhanced video processing capabilities, supporting 4K 60FPS outputs. With expertise spanning over a decade in ISP technology, the KL730 stands out with its noise reduction, wide dynamic range, fisheye correction, and low-light imaging performance. It caters to markets like intelligent security, autonomous vehicles, video conferencing, and industrial camera systems, among others.

Kneron
TSMC
12nm
16 Categories
View Details

Akida 2nd Generation

The second-generation Akida platform builds upon the foundation of its predecessor with enhanced computational capabilities and increased flexibility for a broader range of AI and machine learning applications. This version supports 8-bit weights and activations in addition to the flexible 4- and 1-bit operations, making it a versatile solution for high-performance AI tasks. Akida 2 introduces support for programmable activation functions and skip connections, further enhancing the efficiency of neural network operations. These capabilities are particularly advantageous for implementing sophisticated machine learning models that require complex, interconnected processing layers. The platform also features support for Spatio-Temporal and Temporal Event-Based Neural Networks, advancing its application in real-time, on-device AI scenarios. Built as a silicon-proven, fully digital neuromorphic solution, Akida 2 is designed to integrate seamlessly with various microcontrollers and application processors. Its highly configurable architecture offers post-silicon flexibility, making it an ideal choice for developers looking to tailor AI processing to specific application needs. Whether for low-latency video processing, real-time sensor data analysis, or interactive voice recognition, Akida 2 provides a robust platform for next-generation AI developments.
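As a rough illustration of what the 8-, 4-, and 1-bit weight and activation options trade off, the NumPy sketch below applies generic symmetric uniform quantization to a weight tensor at two bit widths. It only illustrates the arithmetic involved, not BrainChip's quantization scheme, which is handled by the MetaTF tooling.

```python
import numpy as np

def quantize_symmetric(x, bits):
    """Uniformly quantize a tensor to signed integers of the given bit width.

    Generic symmetric quantization for illustration only; the actual Akida
    flow applies its own quantization-aware training tooling.
    """
    qmax = 2 ** (bits - 1) - 1                      # e.g. 127 for 8-bit, 7 for 4-bit
    max_abs = np.max(np.abs(x))
    scale = max_abs / qmax if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)
    return q, scale

weights = np.random.randn(64, 64).astype(np.float32)
q8, s8 = quantize_symmetric(weights, 8)             # near-lossless
q4, s4 = quantize_symmetric(weights, 4)             # smaller, coarser grid
print("8-bit reconstruction error:", np.abs(weights - q8 * s8).mean())
print("4-bit reconstruction error:", np.abs(weights - q4 * s4).mean())
```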

BrainChip
11 Categories
View Details

Metis AIPU PCIe AI Accelerator Card

The Metis AIPU PCIe AI Accelerator Card is engineered for developers demanding superior AI performance. With its quad-core Metis AIPU, this card delivers up to 214 TOPS, tackling challenging vision applications with unmatched efficiency. The PCIe card is designed with user-friendly integration in mind, featuring the Voyager SDK software stack that accelerates application deployment. Offering impressive processing speeds, the card supports up to 3,200 FPS for ResNet-50 models, providing a competitive edge for demanding AI tasks. Its design ensures it meets the needs of a wide array of AI applications, allowing for scalability and adaptability in various use cases.

Axelera AI
2D / 3D, AI Processor, AMBA AHB / APB/ AXI, Building Blocks, CPU, Ethernet, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor, WMV
View Details

Akida IP

The Akida IP is a groundbreaking neural processor designed to emulate the cognitive functions of the human brain within a compact and energy-efficient architecture. This processor is specifically built for edge computing applications, providing real-time AI processing for vision, audio, and sensor fusion tasks. The scalable neural fabric, ranging from 1 to 128 nodes, features on-chip learning capabilities, allowing devices to adapt and learn from new data with minimal external inputs, enhancing privacy and security by keeping data processing localized. Akida's unique design supports 4-, 2-, and 1-bit weight and activation operations, maximizing computational efficiency while minimizing power consumption. This flexibility in configuration, combined with a fully digital neuromorphic implementation, ensures a cost-effective and predictable design process. Akida is also equipped with event-based acceleration, drastically reducing the demands on the host CPU by facilitating efficient data handling and processing directly within the sensor network. Additionally, Akida's on-chip learning supports incremental learning techniques like one-shot and few-shot learning, making it ideal for applications that require quick adaptation to new data. These features collectively support a broad spectrum of intelligent computing tasks, including object detection and signal processing, all performed at the edge, thus eliminating the need for constant cloud connectivity.
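One-shot and few-shot adaptation of the kind described here is commonly framed as attaching new class prototypes to the output of a frozen feature extractor. The sketch below shows that generic idea with a nearest-prototype classifier; it is not a description of Akida's actual on-chip learning rule.

```python
import numpy as np

class PrototypeClassifier:
    """Few-shot classifier: each class is the mean of its example embeddings.

    Generic nearest-prototype scheme for illustration; Akida's on-chip
    learning uses its own event-based update mechanism.
    """
    def __init__(self):
        self.prototypes = {}

    def learn_class(self, label, embeddings):
        # One- or few-shot: a handful of embeddings defines the class.
        self.prototypes[label] = np.mean(embeddings, axis=0)

    def predict(self, embedding):
        labels = list(self.prototypes)
        dists = [np.linalg.norm(embedding - self.prototypes[l]) for l in labels]
        return labels[int(np.argmin(dists))]

clf = PrototypeClassifier()
clf.learn_class("cat", np.random.randn(3, 128))   # 3-shot class
clf.learn_class("dog", np.random.randn(1, 128))   # 1-shot class
print(clf.predict(np.random.randn(128)))
```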

BrainChip
AI Processor, Audio Processor, Coprocessor, CPU, Cryptography Cores, GPU, Input/Output Controller, IoT Processor, Platform Security, Processor Core Independent, Vision Processor
View Details

Yitian 710 Processor

The Yitian 710 Processor is a groundbreaking component in processor technology, designed with cutting-edge architecture to enhance computational efficiency. This processor is tailored for cloud-native environments, offering robust support for high-demand computing tasks. It is engineered to deliver significant improvements in performance, making it an ideal choice for data centers aiming to optimize their processing power and energy efficiency. With its advanced features, the Yitian 710 stands at the forefront of processor innovation, ensuring seamless integration with diverse technology platforms and enhancing the overall computing experience across industries.

T-Head Semiconductor
AI Processor, AMBA AHB / APB/ AXI, Audio Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Independent, Processor Cores, Vision Processor
View Details

MetaTF

MetaTF is BrainChip's premier development tool platform designed to complement its neuromorphic technology solutions. This platform is a comprehensive toolkit that empowers developers to convert and optimize standard machine learning models into formats compatible with BrainChip's Akida technology. One of its key advantages is its ability to adjust models into sparse formats, enhancing processing speed and reducing power consumption. The MetaTF framework provides an intuitive interface for integrating BrainChip’s specialized AI capabilities into existing workflows. It supports streamlined adaptation of models to ensure they are optimized for the unique characteristics of neuromorphic processing. Developers can utilize MetaTF to rapidly iterate and refine AI models, making the deployment process smoother and more efficient. By providing direct access to pre-trained models and tuning mechanisms, MetaTF allows developers to capitalize on the benefits of event-based neural processing with minimal configuration effort. This platform is crucial for advancing the application of machine learning across diverse fields such as IoT devices, healthcare technology, and smart infrastructure.
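A typical conversion flow of the kind described above has three steps: load or train a standard Keras model, quantize its weights and activations to low bit widths, and convert the result into an Akida-compatible model. The sketch below assumes a cnn2snn-style Python interface; the module path and function signatures are assumptions that may differ between MetaTF releases, so treat them as placeholders and consult the MetaTF documentation.

```python
import tensorflow as tf
# Assumed import path and signatures based on BrainChip's published MetaTF
# tooling; verify against the MetaTF docs for your release.
from cnn2snn import quantize, convert

# 1. Start from an ordinary Keras model.
keras_model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# 2. Quantize weights and activations to low bit widths (assumed signature).
quantized_model = quantize(keras_model, weight_quantization=4,
                           activ_quantization=4)

# 3. Convert the quantized network into an Akida-compatible model.
akida_model = convert(quantized_model)
akida_model.summary()
```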

BrainChip
AI Processor, Coprocessor, Processor Core Independent, Vision Processor
View Details

AI Camera Module

The AI Camera Module from Altek is a versatile, high-performance component designed to meet the increasing demand for smart vision solutions. This module features a rich integration of imaging lens design and combines both hardware and software capacities to create a seamless operational experience. Its design is reinforced by Altek's deep collaboration with leading global brands, ensuring a top-tier product capable of handling diverse market requirements. Equipped to cater to AI and IoT interplays, the module delivers outstanding capabilities that align with the expectations for high-resolution imaging, making it suitable for edge computing applications. The AI Camera Module ensures that end-user diversity is meaningfully addressed, offering customization in device functionality which supports advanced processing requirements such as 2K and 4K video quality. This module showcases Altek's prowess in providing comprehensive, all-in-one camera solutions which leverage sophisticated imaging and rapid processing to handle challenging conditions and demands. The AI Camera's technical blueprint supports complex AI algorithms, enhancing not just image quality but also the device's interactive capacity through facial recognition and image tracking technology.

Altek Corporation
Samsung
22nm
2D / 3D, AI Processor, AMBA AHB / APB/ AXI, Audio Interfaces, GPU, Image Conversion, IoT Processor, JPEG, Receiver/Transmitter, SATA, Vision Processor
View Details

Chimera GPNPU

The Quadric Chimera General Purpose Neural Processing Unit (GPNPU) delivers unparalleled performance for AI workloads, characterized by its ability to handle diverse and complex tasks without requiring separate processors for different operations. Designed to unify AI inference and traditional computing processes, the GPNPU supports matrix, vector, and scalar tasks within a single, cohesive execution pipeline. This design not only simplifies the integration of AI capabilities into system-on-chip (SoC) architectures but also significantly boosts developer productivity by allowing them to focus on optimizing rather than partitioning code. The Chimera GPNPU is highly scalable, supporting a wide range of operations across all market segments, including automotive applications with its ASIL-ready versions. With a performance range from 1 to 864 TOPS, it excels in running the latest AI models, such as vision transformers and large language models, alongside classic network backbones. This flexibility ensures that devices powered by Chimera GPNPU can adapt to advancing AI trends, making them suitable for applications that require both immediate performance and long-term capability. A key feature of the Chimera GPNPU is its fully programmable nature, making it a future-proof solution for deploying cutting-edge AI models. Unlike traditional NPUs that rely on hardwired operations, the Chimera GPNPU uses a software-driven approach with its source RTL form, making it a versatile option for inference in mobile, automotive, and edge computing applications. This programmability allows for easy updating and adaptation to new AI model operators, maximizing the lifespan and relevance of chips that utilize this technology.

Quadric
15 Categories
View Details

xcore.ai

The xcore.ai platform by XMOS is a versatile, high-performance microcontroller designed for the integration of AI, DSP, and real-time I/O processing. Focusing on bringing intelligence to the edge, this platform facilitates the construction of entire DSP systems using software without the need for multiple discrete chips. Its architecture is optimized for low-latency operation, making it suitable for diverse applications from consumer electronics to industrial automation. This platform offers a robust set of features conducive to sophisticated computational tasks, including support for AI workloads and enhanced control logic. The xcore.ai platform streamlines development processes by providing a cohesive environment that blends DSP capabilities with AI processing, enabling developers to realize complex applications with greater efficiency. By doing so, it reduces the complexity typically associated with chip integration in advanced systems. Designed for flexibility, xcore.ai supports a wide array of applications across various markets. Its ability to handle audio, voice, and general-purpose processing makes it an essential building block for smart consumer devices, industrial control systems, and AI-powered solutions. Coupled with comprehensive software support and development tools, the xcore.ai ensures a seamless integration path for developers aiming to push the boundaries of AI-enabled technologies.

XMOS Semiconductor
21 Categories
View Details

Metis AIPU M.2 Accelerator Module

The Metis AIPU M.2 Accelerator Module is designed for devices that require high-performance AI inference in a compact form factor. Powered by a quad-core Metis AI Processing Unit (AIPU), this module optimizes power consumption and integration, making it ideal for AI-driven applications. With a dedicated memory of 1 GB DRAM, it enhances the capabilities of vision processing systems, providing significant boosts in performance for devices with Next Generation Form Factor (NGFF) M.2 sockets. Ideal for use in computer vision systems and more, it offers hassle-free integration and evaluation with Axelera's Voyager SDK. This accelerator module is tailored for any application seeking to harness the power of AI processing efficiently. The Metis AIPU M.2 Module streamlines the deployment of AI applications, ensuring high performance with reduced power consumption.

Axelera AI
2D / 3D, AI Processor, AMBA AHB / APB/ AXI, Building Blocks, CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor, WMV
View Details

Talamo SDK

The Talamo Software Development Kit (SDK) is an advanced solution from Innatera designed to expedite the development of neuromorphic AI applications. It integrates seamlessly with PyTorch, providing developers with a familiar environment to build and extend AI models specifically for spiking neural processors. By enhancing the standard PyTorch workflow, Talamo reduces the complexity of constructing spiking neural networks, allowing a broader range of developers to create sophisticated AI solutions without requiring deep expertise in neuromorphic computing. Talamo's capabilities include automatic mapping of trained models onto Innatera's heterogeneous computing architecture, coupled with a robust architecture simulator for efficient validation and iteration. This means developers can iterate quickly and efficiently, optimizing their applications for performance and power without extensive upfront reconfiguration or capital outlay. The SDK supports the creation of combined application pipelines that merge signal processing with AI, including custom functions and neural network implementations. This gives developers the flexibility to tailor solutions to specific needs, be it in audio processing, gesture recognition, or environmental sensing. Through its comprehensive toolkit, the Talamo SDK empowers users to translate conceptual models into high-performing AI applications that leverage the unique processing strengths of spiking neural networks, ultimately lowering barriers to innovation in low-power, edge-based AI.
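Because Talamo extends the standard PyTorch workflow, model definition looks like ordinary torch code. In the sketch below the model definition uses the real PyTorch API, while the mapping and simulation calls are hypothetical placeholders for the Talamo-specific steps, whose actual names and signatures are documented in Innatera's SDK.

```python
import torch
import torch.nn as nn

# Ordinary PyTorch model definition -- this part uses the real torch API.
class GestureNet(nn.Module):
    def __init__(self, n_inputs=16, n_hidden=64, n_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_classes),
        )

    def forward(self, x):
        return self.net(x)

model = GestureNet()
# Train with the usual PyTorch loop (optimizer, loss, DataLoader) ...

# The steps below are hypothetical placeholders for Talamo's added stages:
# import talamo                                   # assumed package name
# mapped = talamo.map(model, target="T1")         # map onto the spiking fabric
# talamo.simulate(mapped, test_inputs)            # validate on the simulator
```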

Innatera Nanosystems
AI Processor, CPU, Multiprocessor / DSP, Processor Core Independent, Vision Processor
View Details

aiWare

The aiWare NPU (Neural Processing Unit) by aiMotive is a high-performance hardware solution tailored specifically for automotive AI applications. It is engineered to accelerate inference tasks for autonomous driving systems, ensuring excellent performance across a variety of neural network workloads. aiWare delivers significant flexibility and efficiency, capable of scaling from basic Level 2 applications to complex multi-sensor Level 3+ systems. Achieving up to 98% efficiency, aiWare's design focuses on minimizing power utilization while maximizing core performance. It supports a broad spectrum of neural network architectures, including convolutional neural networks, transformers, and recurrent networks, making it suitable for diverse AI tasks in the automotive sphere. The NPU's architecture allows for minimal external memory access, thanks to its highly efficient dataflow design that capitalizes on on-chip memory caching. With a robust toolkit known as aiWare Studio, engineers can efficiently optimize neural networks without in-depth knowledge of low-level programming, streamlining development and integration efforts. The aiWare hardware is also compatible with V2X communication and advanced driver assistance systems, adapting to various operational needs with great dexterity. Its comprehensive support for automotive safety standards further cements its reputation as a reliable choice for integrating artificial intelligence into next-generation vehicles.

aiMotive
11 Categories
View Details

SAKURA-II AI Accelerator

The SAKURA-II is a cutting-edge AI accelerator that combines high performance with low power consumption, designed to efficiently handle multi-billion parameter models for generative AI applications. It is particularly suited for tasks that demand real-time AI inferencing with minimal batch processing, making it ideal for applications deployed in edge environments. With a typical power usage of 8 watts and a compact footprint, the SAKURA-II achieves more than twice the AI compute efficiency of comparable solutions. This AI accelerator supports next-generation applications by providing up to 4x more DRAM bandwidth than alternatives, crucial for processing complex vision tasks and large language models (LLMs). The hardware offers software-enabled mixed precision that achieves near-FP32 accuracy, while a unique sparse computing feature optimizes memory usage. Its memory architecture supports up to 32 GB of DRAM, providing ample capacity for intensive AI workloads. The SAKURA-II's modular design allows it to be used in multiple form factors, addressing the diverse needs of modern computing tasks such as those found in smart cities, autonomous robotics, and smart manufacturing. Its adaptability is further enhanced by runtime-configurable data paths, allowing the device to optimize task scheduling and resource allocation dynamically. These features are powered by the Dynamic Neural Accelerator engine, ensuring efficient computation and energy management.

EdgeCortix Inc.
AI Processor, CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Vision Processor
View Details

KL630 AI SoC

The KL630 is a pioneering AI chipset featuring Kneron's latest NPU architecture, which is the first to support Int4 precision and transformer networks. This cutting-edge design ensures exceptional compute efficiency with minimal energy consumption, making it ideal for a wide array of applications. With an ARM Cortex A5 CPU at its core, the KL630 excels in computation while maintaining low energy expenditure. This SOC is designed to handle both high and low light conditions optimally and is perfectly suited for use in diverse edge AI devices, from security systems to expansive city and automotive networks.

Kneron
TSMC
12nm LP/LP+
ADPCM, AI Processor, Camera Interface, CPU, GPU, Input/Output Controller, Processor Core Independent, USB, VGA, Vision Processor
View Details

AX45MP

The AX45MP is engineered as a high-performance processor that supports multicore architecture and advanced data processing capabilities, particularly suitable for applications requiring extensive computational efficiency. Powered by the AndesCore processor line, it capitalizes on a multicore symmetric multiprocessing framework, integrating up to eight cores with robust L2 cache management. The AX45MP incorporates advanced features such as vector processing capabilities and support for MemBoost technology to maximize data throughput. It caters to high-demand applications including machine learning, digital signal processing, and complex algorithmic computations, ensuring data coherence and efficient power usage.

Andes Technology
2D / 3D, ADPCM, CPU, IoT Processor, Processor Core Independent, Processor Cores, Vision Processor
View Details

Hanguang 800 AI Accelerator

The Hanguang 800 AI Accelerator is a high-performance AI processor developed to meet the complex demands of artificial intelligence workloads. This accelerator is engineered with cutting-edge AI processing capabilities, enabling rapid data analysis and machine learning model inference. Designed for flexibility, the Hanguang 800 delivers superior computation speed and energy efficiency, making it an optimal choice for AI applications in a variety of sectors, from data centers to edge computing. By supporting high-volume data throughput, it enables organizations to achieve significant advantages in speed and efficiency, facilitating the deployment of intelligent solutions.

T-Head Semiconductor
AI Processor, CPU, IoT Processor, Processor Core Dependent, Security Processor, Vision Processor
View Details

KL520 AI SoC

The KL520 marks Kneron's foray into the edge AI landscape, offering an impressive combination of size, power efficiency, and performance. Armed with dual ARM Cortex M4 processors, this chip can operate independently or as a co-processor to enable AI functionalities such as smart locks and security monitoring. The KL520 is adept at 3D sensor integration, making it an excellent choice for applications in smart home ecosystems. Its compact design allows devices powered by it to operate on minimal power, such as running on AA batteries for extended periods, showcasing its exceptional power management capabilities.

Kneron
TSMC
65nm
AI Processor, Camera Interface, Clock Generator, CPU, GPU, IoT Processor, MPEG 4, Processor Core Independent, Receiver/Transmitter, Vision Processor
View Details

KL530 AI SoC

The KL530 represents a significant advancement in AI chip technology with a new NPU architecture optimized for both INT4 precision and transformer networks. This SOC is engineered to provide high processing efficiency and low power consumption, making it suitable for AIoT applications and other innovative scenarios. It features an ARM Cortex M4 CPU designed for low-power operation and offers a robust computational power of up to 1 TOPS. The chip's ISP enhances image quality, while its codec ensures efficient multimedia compression. Notably, the chip's cold start time is under 500 ms with an average power draw of less than 500 mW, establishing it as a leader in energy efficiency.

Kneron
TSMC
28nm SLP
AI Processor, Camera Interface, Clock Generator, CPU, CSC, GPU, IoT Processor, Peripheral Controller, Vision Processor
View Details

Tianqiao-70 Low-Power Commercial Grade 64-bit RISC-V CPU

Crafted to deliver significant power savings, the Tianqiao-70 is a low-power RISC-V CPU that excels in commercial-grade scenarios. This 64-bit CPU core is primarily designed for applications where power efficiency is critical, such as mobile devices and computationally intensive IoT solutions. The core's architecture is specifically optimized to perform under stringent power budgets without compromising on the processing power needed for complex tasks. It provides an efficient solution for scenarios that demand reliable performance while maintaining a low energy footprint. Through its refined design, the Tianqiao-70 supports a broad spectrum of applications, including personal computing, machine learning, and mobile communications. Its versatility and power-awareness make it a preferred choice for developers focused on sustainable and scalable computing architectures.

StarFive Technology
AI Processor, CPU, Microcontroller, Multiprocessor / DSP, Processor Cores, Vision Processor
View Details

3D Imaging Chip

Altek's 3D Imaging Chip is a breakthrough in the field of vision technology. Designed with an emphasis on depth perception, it enhances the accuracy of 3D scene capturing, making it ideal for applications requiring precise distance gauging such as autonomous vehicles and drones. The chip integrates seamlessly within complex systems, boasting superior recognition accuracy that ensures reliable and robust performance. Building upon years of expertise in 3D imaging, this chip supports multiple 3D modes, offering flexible solutions for devices from surveillance robots to delivery mechanisms. It facilitates medium-to-long-range detection needs thanks to its refined depth sensing capabilities. Altek's approach ensures a comprehensive package from modular design to chip production, creating a cohesive system that marries both hardware and software effectively. Deployed within various market segments, it delivers adaptable image solutions with dynamic design agility. Its imaging prowess is further enhanced by state-of-the-art algorithms that refine image quality and facilitate facial detection and recognition, thereby expanding its utility across diverse domains.

Altek Corporation
TSMC
16nm FFC/FF+
A/D Converter, Analog Front Ends, Coprocessor, Graphics & Video Modules, Image Conversion, JPEG, Oversampling Modulator, Photonics, PLL, Sensor, Vision Processor
View Details

WiseEye2 AI Solution

Himax's WiseEye2 AI solution is a pioneering technology aimed at ultra-low power sensor fusion for AI on-device applications. This innovative solution integrates artificial intelligence capabilities within consumer electronics, offering smart solutions for homes, cities, and various industrial applications. The WiseEye2 technology excels in enabling devices to perform complex AI tasks onsite without relying heavily on remote data centers. This feature not only minimizes latency but also enhances privacy aspects by processing data locally. It supports a range of applications from smart home appliances and intelligent security systems to cutting-edge consumer electronics. Designed with efficiency in mind, the WiseEye2 AI solution is built to operate under minimal power conditions, extending the battery life of devices it powers. This makes it ideal for portable and remote applications where energy conservation is critical.

Himax Technologies, Inc.
AI Processor, Vision Processor
View Details

Ceva-SensPro2 - Vision AI DSP

The Ceva-SensPro DSP family unites scalar processing units and vector processing units under an 8-way VLIW architecture. The family incorporates advanced control features such as a branch target buffer and a loop buffer to speed up execution and reduce power. There are six family members, each with a different array of MACs, targeted at different application areas and performance points. These range from the Ceva-SP100, providing 128 8-bit integer or 32 16-bit integer MACs at 0.2 TOPS for compact applications such as vision processing in wearables and mobile devices, to the Ceva-SP1000, with 1024 8-bit or 256 16-bit MACs reaching 2 TOPS for demanding applications such as automotive, robotics, and surveillance. Two of the family members, the Ceva-SPF2 and Ceva-SPF4, employ 32 or 64 32-bit floating-point MACs, respectively, for applications in electric-vehicle power-train control and battery management; these two members are supported by libraries for Eigen linear algebra, MATLAB vector operations, and the TVM graph compiler. Highly configurable, the vector processing units in all family members can add domain-specific instructions for areas such as vision processing, radar, or simultaneous localization and mapping (SLAM) for robotics. Integer family members can also add optional floating-point capabilities. All family members have independent instruction and data memory subsystems and a Ceva-Connect queue manager for AXI-attached accelerators or coprocessors. The Ceva-SensPro2 family is programmable in C/C++ as well as in Halide and OpenMP, and is supported by an Eclipse-based development environment, extensive libraries spanning a wide range of applications, and the Ceva-NeuPro Studio AI development environment. Learn more about the Ceva-SensPro2 solution: https://www.ceva-ip.com/product/ceva-senspro2/?utm_source=silicon_hub&utm_medium=ip_listing&utm_campaign=ceva_senspro2_page
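The TOPS figures quoted for the family follow directly from the MAC counts once a clock rate is assumed, counting each MAC as two operations. The short calculation below uses assumed clock rates of roughly 0.8 and 1 GHz, which are not stated in the listing.

```python
def peak_tops(n_macs, clock_ghz):
    """Peak throughput in TOPS: MACs/cycle x 2 ops/MAC x cycles/second."""
    return n_macs * 2 * clock_ghz * 1e9 / 1e12

# Assumed clock rates -- the listing states only MAC counts and TOPS targets.
print(peak_tops(128, 0.8))    # Ceva-SP100-class: ~0.2 TOPS with 128 8-bit MACs
print(peak_tops(1024, 1.0))   # Ceva-SP1000-class: ~2 TOPS with 1024 8-bit MACs
```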

Ceva, Inc.
DSP Core, Multiprocessor / DSP, Processor Cores, Vision Processor
View Details

RAIV General Purpose GPU

RAIV represents Siliconarts' General Purpose-GPU (GPGPU) offering, engineered to accelerate data processing across diverse industries. This versatile GPU IP is essential in sectors engaged in high-performance computing tasks, such as autonomous driving, IoT, and sophisticated data centers. With RAIV, Siliconarts taps into the potential of the fourth industrial revolution, enabling rapid computation and seamless data management. The RAIV architecture is poised to deliver unmatched efficiency in high-demand scenarios, supporting massive parallel processing and intricate calculations. It provides an adaptable framework that caters to the needs of modern computing, ensuring balanced workloads and optimized performance. Whether used for VR/AR applications or supporting the back-end infrastructure of data-intensive operations, RAIV is designed to meet and exceed industry expectations. RAIV’s flexible design can be tailored to enhance a broad spectrum of applications, promising accelerated innovation in sectors dependent on AI and machine learning. This GPGPU IP not only underscores Siliconarts' commitment to technological advancement but also highlights its capability to craft solutions that drive forward computational boundaries.

Siliconarts, Inc.
AI Processor, Building Blocks, CPU, GPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor, Wireless Processor
View Details

aiSim 5

aiSim 5 is a state-of-the-art automotive simulation platform designed for ADAS and autonomous driving testing. Recognized as the world's first ISO26262 ASIL-D certified simulator, it offers unparalleled accuracy and determinism in simulating various driving scenarios and environmental conditions. The simulator integrates AI-based digital twin technology and an advanced rendering engine to create realistic traffic scenarios, helping engineers verify and validate driver assistance systems. Harnessing powerful physics-based simulation capabilities, aiSim 5 replicates real-world phenomena like weather effects and complex traffic dynamics with precision. By offering a comprehensive set of 3D assets and scenarios, it allows for the extensive testing of systems in both typical and edge conditions. With its flexible and open architecture, aiSim 5 can seamlessly integrate into existing testing toolchains, supporting significant variations in sensor configurations and driving algorithms. The platform encourages innovation in simulation methodologies by providing tools for scenario randomization and synthetic data generation, crucial for developing resilient ADAS applications. Additionally, its cloud-ready architecture makes it applicable across various hardware platforms, turning simulation into a versatile resource available on inexpensive or high-end computing configurations alike.

aiMotive
24 Categories
View Details

KL720 AI SoC

The KL720 AI SoC is designed for optimal performance-to-power ratios, achieving 0.9 TOPS per watt. This makes it one of the most efficient chips available for edge AI applications. The SOC is crafted to meet high processing demands, suitable for high-end devices including smart TVs, AI glasses, and advanced cameras. With an ARM Cortex M4 CPU, it enables superior 4K imaging, full HD video processing, and advanced 3D sensing capabilities. The KL720 also supports natural language processing (NLP), making it ideal for emerging AI interfaces such as AI assistants and gaming gesture controls.

Kneron
TSMC
16nm FFC/FF+
2D / 3D, AI Processor, Audio Interfaces, AV1, Camera Interface, CPU, GPU, Image Conversion, TICO, Vision Processor
View Details

Spiking Neural Processor T1 - Ultra-Low-Power Microcontroller for Sensing

The Spiking Neural Processor T1 is designed as a highly efficient microcontroller that integrates neuromorphic intelligence closely with sensors. It employs a unique spiking neural network engine paired with a nimble RISC-V processor core, forming a cohesive unit for advanced data processing. With this setup, the T1 excels in delivering next-gen AI capabilities embedded directly at the sensor, operating within an exceptionally low power consumption range, ideal for battery-dependent and latency-sensitive applications. This processor marks a notable advancement in neuromorphic technology, allowing for real-time pattern recognition with minimal power draw. It supports various interfaces like QSPI, I2C, and UART, fitting into a compact 2.16mm x 3mm package, which facilitates easy integration into diverse electronic devices. Additionally, its architecture is designed to process different neural network models efficiently, from spiking to deep neural networks, providing versatility across applications. The T1 Evaluation Kit furthers this ease of adoption by enabling developers to use the Talamo SDK to create or deploy applications readily. It includes tools for performance profiling and supports numerous common sensors, making it a strong candidate for projects aiming to leverage low-power, intelligent processing capabilities. This innovative chip's ability to manage power efficiency with high-speed pattern processing makes it especially suitable for advanced sensing tasks found in wearables, smart home devices, and more.

Innatera Nanosystems
TSMC
22nm
AI Processor, Coprocessor, CPU, DSP Core, Input/Output Controller, IoT Processor, Microcontroller, Multiprocessor / DSP, Standard cell, Vision Processor, Wireless Processor
View Details

Maverick-2 Intelligent Compute Accelerator

The Maverick-2 Intelligent Compute Accelerator (ICA) by Next Silicon represents a transformative leap in high-performance compute architecture. It seamlessly integrates into HPC systems with a pioneering software-defined approach that dynamically optimizes hardware configurations based on real-time application demands. This enables high efficiency and unparalleled performance across diverse workloads including HPC, AI, and other data-intensive applications. Maverick-2 harnesses a 5nm process technology, utilizing HBM3E memory for enhanced data throughput and efficient energy usage.

Built with developers in mind, Maverick-2 supports an array of programming languages such as C/C++, FORTRAN, and OpenMP without the necessity for proprietary stacks. This flexibility not only mitigates porting challenges but significantly reduces development time and costs. A distinguishing feature of Maverick-2 is its real-time telemetry capabilities that provide valuable insights into performance metrics, allowing for refined optimizations during execution.

The architecture supports versatile interfaces such as PCIe Gen 5 and offers configurations that accommodate complex workloads using either single or dual-die setups. Its intelligent algorithms autonomously identify computational bottlenecks to enhance throughput and scalability, thus future-proofing investments as computing demands evolve. Maverick-2's utility spans various sectors including life sciences, energy, and fintech, underlining its adaptability and high-performance capabilities.

Next Silicon Ltd.
TSMC
28nm
11 Categories
View Details

RayCore MC Ray Tracing GPU

The RayCore MC is a revolutionary real-time path and ray-tracing GPU designed to enhance rendering with minimal power consumption. This GPU IP is tailored for real-time applications, offering a rich graphical experience without compromising on speed or efficiency. By utilizing advanced ray-tracing capabilities, RayCore MC provides stunning visual effects and lifelike animations, setting a high standard for quality in digital graphics. Engineered for scalability and performance, RayCore MC stands out in the crowded field of GPU technologies by delivering seamless, low-latency graphics. It is particularly suited for applications in gaming, virtual reality, and the burgeoning metaverse, where realistic rendering is paramount. The architecture supports efficient data management, ensuring that even the most complex visual tasks are handled with ease. RayCore MC's architecture supports a wide array of applications beyond entertainment, making it a vital tool in areas such as autonomous vehicles and data-driven industries. Its blend of power efficiency and graphical prowess ensures that developers can rely on RayCore MC for cutting-edge, resource-light graphic solutions.

Siliconarts, Inc.
2D / 3D, Audio Processor, CPU, GPU, Graphics & Video Modules, Vision Processor
View Details

2D FFT

The 2D FFT core is designed to efficiently handle two-dimensional FFT processing, ideal for applications in image and video processing where data is inherently two-dimensional. This core is engineered to integrate both internal and external memory configurations, which optimize data handling for complex multimedia processing tasks, ensuring a high level of performance is maintained throughout. Utilizing sophisticated algorithms, the 2D FFT core processes data through two FFT engines. This dual approach maximizes throughput, typically limiting bottlenecks to memory bandwidth constraints rather than computational delays. This efficiency is critical for applications handling large volumes of multimedia data where real-time processing is a requisite. The capacity of the 2D FFT core to adapt to varying processing environments marks its versatility in the digital processing landscape. By ensuring robust data processing capabilities, it addresses the challenges of dynamic data movement, providing the reliability necessary for multimedia systems. Its strategic design supports the execution of intensive computational tasks while maintaining the operational flow integral to real-time applications.
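A 2D FFT is separable: one pass of 1D FFTs over the rows followed by a second pass over the columns, which is why a dual-engine design with a memory buffer between passes can keep both engines busy. The NumPy sketch below illustrates only the underlying math, not the core's hardware implementation.

```python
import numpy as np

def fft2_row_column(image):
    """2D FFT via the separable row-column method.

    Pass 1 transforms every row, pass 2 transforms every column of the
    intermediate result -- the same split a dual-FFT-engine core exploits,
    with memory holding the intermediate frame between passes.
    """
    rows_done = np.fft.fft(image, axis=1)      # first pass: row FFTs
    return np.fft.fft(rows_done, axis=0)       # second pass: column FFTs

frame = np.random.rand(256, 256)
assert np.allclose(fft2_row_column(frame), np.fft.fft2(frame))
```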

Dillon Engineering, Inc.
Tower, VIS
80nm, 180nm
Coprocessor, Ethernet, Image Conversion, Network on Chip, Receiver/Transmitter, Vision Processor
View Details

AI Inference Platform

Designed to cater to AI-specific needs, SEMIFIVE’s AI Inference Platform provides tailored solutions that seamlessly integrate advanced technologies to optimize performance and efficiency. This platform is engineered to handle the rigorous demands of AI workloads through a well-integrated approach combining hardware and software innovations matched with AI acceleration features. The platform supports scalable AI models, delivering exceptional processing capabilities for tasks involving neural network inference. With a focus on maximizing throughput and efficiency, it facilitates real-time processing and decision-making, which is crucial for applications such as machine learning and data analytics. SEMIFIVE’s platform simplifies AI implementation by providing an extensive suite of development tools and libraries that accelerate design cycles and enhance comprehensive system performance. The incorporation of state-of-the-art caching mechanisms and optimized data flow ensures the platform’s ability to handle large datasets efficiently.

SEMIFIVE
Samsung
5nm, 12nm, 14nm
AI Processor, Cell / Packet, CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Vision Processor
View Details

Polar ID Biometric Security System

Polar ID from Metalenz offers a cutting-edge face unlock solution, using advanced meta-optic technology to provide secure, high-resolution facial recognition capabilities. It captures the unique "polarization signature" of a human face, making it resistant to both 2D photos and sophisticated 3D masks. Polar ID operates efficiently in a variety of lighting conditions, from bright daylight to dark environments, ensuring its utility extends across all smartphone models without sacrificing security or user experience. This technology replaces complex structured light modules, incorporating a single near-infrared polarization camera and active illumination source. It significantly reduces costs and footprint, supporting a broad adoption across hundreds of millions of mobile devices. With its low price point and high performance, Polar ID elevates smartphone security, offering robust protection for digital transactions and identity verification. By enabling this on an embedded platform with compatibility for Qualcomm's Snapdragon processors, Metalenz ensures widespread applicability. The key advantage of Polar ID is its affordability and ease of integration, as it eliminates the need for larger, more intrusive notches in phone designs. Its sophisticated polarization sensing means secure authentication is possible even if the user wears sunglasses or masks. Polar ID sets a new benchmark in smartphone security by delivering convenience and enhanced protection, marking it as the first polarization sensor available for smartphones.

Metalenz Inc.
13 Categories
View Details

RISCV SoC - Quad Core Server Class

The RISCV SoC developed by Dyumnin Semiconductors is engineered with a 64-bit quad-core server-class RISCV CPU, aiming to bridge various application needs with an integrated, holistic system design. Each subsystem of this SoC, from AI/ML capabilities to automotive and multimedia functionalities, is constructed to deliver optimal performance and streamlined operations. Designed as a reference model, this SoC enables quick adaptation and deployment, significantly reducing the time-to-market for clients. The AI Accelerator subsystem enhances AI operations with its collaboration of a custom central processing unit, intertwined with a specialized tensor flow unit. In the multimedia domain, the SoC boasts integration capabilities for HDMI, Display Port, MIPI, and other advanced graphic and audio technologies, ensuring versatile application across various multimedia requirements. Memory handling is another strength of this SoC, with support for protocols ranging from DDR and MMC to more advanced interfaces like ONFI and SD/SDIO, ensuring seamless connectivity with a wide array of memory modules. Moreover, the communication subsystem encompasses a broad spectrum of connectivity protocols, including PCIe, Ethernet, USB, and SPI, crafting an all-rounded solution for modern communication challenges. The automotive subsystem, offering CAN and CAN-FD protocols, further extends its utility into automotive connectivity.

Dyumnin Semiconductors
28 Categories
View Details

Dynamic Neural Accelerator II Architecture

The Dynamic Neural Accelerator II (DNA-II) Architecture by EdgeCortix represents a leap in neural network processing capabilities, designed to yield exceptional parallelism and efficiency. It employs a runtime reconfigurable architecture that allows data paths to be reconfigured on-the-fly, maximizing parallelism and minimizing memory bandwidth usage on-chip. The DNA-II core can power AI applications across both convolutional and transformer networks, making it adaptable for a range of edge applications. Its scalable design, beginning from 1K MACs, facilitates flexible integration into SOC environments, while supporting a variety of target applications. It essentially serves as the powerhouse for the SAKURA-II AI Accelerator, enabling high-performance processing in compact form factors. Through the MERA software stack, DNA-II optimizes how network tasks are ordered and resources are allocated, providing precise scheduling and reducing inefficiencies found in other architectures. Additionally, the DNA-II features efficient energy consumption metrics, critical for edge implementations where performance must be balanced with power constraints.

EdgeCortix Inc.
AI Processor, Processor Core Independent, Vision Processor, Wireless Processor
View Details

ELFIS2 Image Sensor

The ELFIS2 Image Sensor is a sophisticated development from Caeleste tailored for advanced imaging applications. It is designed to offer unparalleled image fidelity across a plethora of environments, making it an indispensable tool for both scientific and space missions. This image sensor excels in capturing high contrast and high detail images, even under challenging conditions such as low light or rapidly changing brightness.

ELFIS2 features state-of-the-art image processing capabilities, combined with robust construction to withstand the rigors of space missions. The sensor is optimized to operate efficiently with minimal power consumption while delivering high-resolution images, ensuring that mission data is both accurate and reliable. The sensor's design also facilitates ease of integration into complex systems, providing a seamless fit for advanced imaging needs.

Caeleste's expertise ensures that the ELFIS2 sensor is equipped with the latest in sensor technology, making it suitable for a variety of applications ranging from astronomy to industrial monitoring. Whether deployed in outer space or earthbound observation platforms, the ELFIS2 Image Sensor proves to be a remarkable blend of technology and craftsmanship.

Caeleste
LFoundry, Samsung
65nm, 500nm
A/D Converter, Analog Front Ends, Analog Subsystems, GPU, Graphics & Video Modules, LCD Controller, Oversampling Modulator, Photonics, Sensor, Temperature Sensor, Vision Processor
View Details

aiData

aiData is designed to streamline the data pipeline for developing models for Advanced Driver-Assistance Systems and Automated Driving solutions. This automated system provides a comprehensive method of managing and processing data, from collection through curation, annotation, and validation. It significantly reduces the time required for data processing by automating many labor-intensive tasks, enabling teams to focus more on development rather than data preparation. The aiData platform includes sophisticated tools for recording, managing, and annotating data, ensuring accuracy and traceability through all stages of the MLOps workflow. It supports the creation of high-quality training datasets, essential for developing reliable and effective AI models. The platform's capabilities extend beyond basic data processing by offering advanced features such as versioning and metrics analysis, allowing users to track data changes over time and evaluate dataset quality before training. The aiData Recorder feature ensures high-quality data collection tailored to diverse sensor configurations, while the Auto Annotator quickly processes data for a variety of objects using AI algorithms, delivering superior precision levels. These features are complemented by aiData Metrics, which provide valuable insights into dataset completeness and adequacy in covering expected operational domains. With seamless on-premise or cloud deployment options, aiData empowers global automotive teams to collaborate efficiently, offering all necessary tools for a complete data management lifecycle. Its integration versatility supports a wide array of applications, helping improve the speed and effectiveness of deploying ADAS models.

aiMotive
AI Processor, AMBA AHB / APB/ AXI, Audio Interfaces, Content Protection Software, Digital Video Broadcast, Embedded Memories, H.264, Processor Core Dependent, Vision Processor
View Details

RISC-V CPU IP NA Class

Specially engineered for the automotive industry, the NA Class IP by Nuclei complies with the stringent ISO26262 functional safety standards. This processor is crafted to handle complex automotive applications, offering flexibility and rigorous safety protocols necessary for mission-critical transportation technologies. Incorporating a range of functional safety features, the NA Class IP is equipped to ensure not only performance but also reliability and safety in high-stakes vehicular environments.

Nuclei System Technology
AI Processor, CAN-FD, CPU, Cryptography Cores, FlexRay, Microcontroller, Platform Security, Processor Core Dependent, Processor Cores, Security Processor, Vision Processor
View Details

Prodigy Universal Processor

Tachyum's Prodigy Universal Processor marks a significant milestone as it combines the functionalities of Central Processing Units (CPUs), General-Purpose Graphics Processing Units (GPGPUs), and Tensor Processing Units (TPUs) into a single cohesive architecture. This groundbreaking design is tailored to meet the escalating demands of artificial intelligence, high-performance computing, and hyperscale data centers by offering unparalleled performance, energy efficiency, and high utilization rates. The Prodigy processor not only tackles common data center challenges like elevated power consumption and stagnating processor performance but also offers a robust solution to enhance server utilization and reduce the carbon footprint of massive computational installations. Notably, it thrives on a simplified programming model grounded in coherent multiprocessor architecture, thereby enabling seamless execution of an array of AI disciplines like Explainable AI, Bio AI, and deep machine learning within a single hardware platform.

Tachyum Inc.
13 Categories
View Details

CTAccel Image Processor on Intel Agilex FPGA

The CTAccel Image Processor on Intel Agilex FPGA is designed to handle high-performance image processing by capitalizing on the robust capabilities of Intel's Agilex FPGAs. These FPGAs, leveraging the 10 nm SuperFin process technology, are ideal for applications demanding high performance, power efficiency, and compact sizes. Featuring advanced DSP blocks and high-speed transceivers, this IP thrives in accelerating image processing tasks that are typically computational-intensive when executed on CPUs. One of the main advantages is its ability to significantly enhance image processing throughput, achieving up to 20 times the speed while maintaining reduced latency. This performance prowess is coupled with low power consumption, leading to decreased operational and maintenance costs due to fewer required server instances. Additionally, the solution is fully compatible with mainstream image processing software, facilitating seamless integration and leveraging existing software investments. The adaptability of the FPGA allows for remote reconfiguration, ensuring that the IP can be tailored to specific image processing scenarios without necessitating a server reboot. This ease of maintenance, combined with a substantial boost in compute density, underscores the IP's suitability for high-demand image processing environments, such as those encountered in data centers and cloud computing platforms.

CTAccel Ltd.
Intel Foundry
12nm
AI Processor, DLL, Graphics & Video Modules, Image Conversion, JPEG, JPEG 2000, Processor Core Independent, Vision Processor
View Details

Trifecta-GPU

The Trifecta-GPU delivers exceptional computational power using the NVIDIA RTX A2000 embedded GPU. Focused on the modular test and measurement (T&M) and electronic warfare (EW) markets, it provides 8.3 FP32 TFLOPS of compute performance. It is tailored for advanced signal processing and machine learning, making it indispensable for modern, software-defined signal processing applications. The GPU is part of the COTS PXIe/CPCIe modular family, known for its flexibility and ease of use. The NVIDIA GPU integration means users can expect robust performance for AI inference applications, facilitating quick deployment in scenarios requiring advanced data processing. Incorporating the latest in graphical performance, the Trifecta-GPU supports a broad range of applications, from high-end computing tasks to graphics-intensive processes. It is particularly beneficial for those needing a reliable and powerful GPU for modular T&M and EW projects.

RADX Technologies, Inc.
AI Processor, CPU, DSP Core, GPU, Multiprocessor / DSP, Peripheral Controller, Processor Core Dependent, Processor Core Independent, Vision Processor
View Details

SiFive Performance

The SiFive Performance family is dedicated to offering high-throughput, low-power processor solutions, suitable for a wide array of applications from data centers to consumer devices. This family includes a range of 64-bit, out-of-order cores configured with options for vector computations, making it ideal for tasks that demand significant processing power alongside efficiency. Performance cores provide unmatched energy efficiency while accommodating a breadth of workload requirements. Their architecture supports up to six-wide out-of-order processing with tailored options that include multiple vector engines. These cores are designed for flexibility, enabling various implementations in consumer electronics, network storage solutions, and complex multimedia processing. The SiFive Performance family facilitates a mix of high performance and low power usage, allowing users to balance the computational needs with power consumption effectively. It stands as a testament to SiFive’s dedication to enabling flexible tech solutions by offering rigorous processing capabilities in compact, scalable packages.

SiFive, Inc.
CPU, DSP Core, Ethernet, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor, Wireless Processor
View Details

RISC-V CPU IP NS Class

The NS Class is Nuclei's crucial offering for applications prioritizing security and fintech solutions. This RISC-V CPU IP securely manages IoT environments with its highly customizable and secure architecture. Equipped to support advanced security protocols and functional safety features, the NS Class is particularly suited for payment systems and other fintech applications, ensuring robust protection and reliable operations. Its design follows the RISC-V standards and is accompanied by customizable configuration options tailored to meet specific security requirements.

Nuclei System Technology
CPU, Cryptography Cores, Embedded Security Modules, Microcontroller, Platform Security, Processor Cores, Security Processor, Security Subsystems, Vision Processor
View Details

RISC-V CPU IP NX Class

The NX Class RISC-V CPU IP by Nuclei is characterized by its 64-bit architecture, making it a robust choice for storage, AR/VR, and AI applications. This processing unit is designed to accommodate high data throughput and demanding computational tasks. By leveraging advanced capabilities, such as virtual memory and enhanced processing power, the NX Class facilitates cutting-edge technological applications and is adaptable for integration into a vast array of high-performance systems.

Nuclei System Technology
Building Blocks, CPU, DSP Core, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent, Processor Cores, Vision Processor, Wireless Processor
View Details

eSi-3264

The eSi-3264 stands out with its support for both 32- and 64-bit operations, including 64-bit fixed- and floating-point SIMD (Single Instruction Multiple Data) DSP extensions. Engineered for applications that demand DSP functionality, it delivers this with a minimal silicon footprint. Its comprehensive instruction set includes specialized commands for a variety of tasks, bolstering its practicality across multiple sectors.

eSi-RISC
All Foundries
16nm, 90nm, 250nm, 350nm
Building Blocks, CPU, DSP Core, Microcontroller, Multiprocessor / DSP, Processor Cores, Vision Processor
View Details
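To make the SIMD DSP claim concrete, here is a minimal behavioral sketch of the kind of packed multiply-accumulate a 64-bit SIMD extension performs in a single instruction; the 16-bit lane width and the lack of saturation are assumptions for illustration, not the eSi-3264's actual instruction semantics.

```python
# Behavioral model of a packed multiply-accumulate over four 16-bit lanes
# (one 64-bit register's worth of data). Lane width and overflow handling
# are illustrative assumptions, not the eSi-3264 ISA definition.
import numpy as np

def simd_mac16(acc: np.ndarray, a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """acc[i] += a[i] * b[i] for four signed 16-bit lanes, widened to avoid overflow."""
    prod = a.astype(np.int64) * b.astype(np.int64)   # 16x16 -> wide products
    return acc + prod

acc = np.zeros(4, dtype=np.int64)
a = np.array([100, -200, 300, -400], dtype=np.int16)
b = np.array([7, 7, 7, 7], dtype=np.int16)
print(simd_mac16(acc, a, b))   # [  700 -1400  2100 -2800]
```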

RISC-V CPU IP NI Class

The NI Class RISC-V CPU IP caters to communication, video processing, and AI applications, providing a balanced architecture for intensive data handling and processing capabilities. With a focus on high efficiency and flexibility, this processor supports advanced data crunching and networking applications, ensuring that systems run smoothly and efficiently even when managing complex algorithms. The NI Class upholds Nuclei's commitment to providing versatile solutions in the evolving tech landscape.

Nuclei System Technology
3GPP-LTE, AI Processor, CPU, Cryptography Cores, Microcontroller, Processor Core Dependent, Processor Cores, Security Processor, Vision Processor
View Details

Origin E1

The Origin E1 is a streamlined neural processing unit designed specifically for always-on applications in personal electronics and smart devices such as smartphones and security systems. The processor focuses on delivering highly efficient AI performance, achieving around 18 TOPS per watt. With its low power requirements, the E1 is well suited to tasks that demand continuous data sampling, such as camera operation in smart surveillance systems, where it runs on less than 20 mW of power. Its packet-based architecture ensures efficient resource utilization, maintaining high performance with lower power and area consumption. The E1's adaptability is enhanced through customizable options that allow it to meet specific PPA requirements, making it a strong choice for applications seeking to improve user privacy and experience by minimizing external memory use.

Expedera
14 Categories
View Details
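As a first-order illustration only, the headline efficiency and power figures quoted above for the Origin E1 can be multiplied together to see roughly how much compute fits into an always-on budget; actual throughput depends on workload, precision, and configuration.

```python
# First-order arithmetic from the quoted Origin E1 figures (18 TOPS/W, < 20 mW).
# This simply multiplies the headline numbers together; it is not a benchmark.
TOPS_PER_WATT = 18      # quoted efficiency
POWER_W = 0.020         # quoted always-on power budget (20 mW)

available_tops = TOPS_PER_WATT * POWER_W
print(f"~{available_tops:.2f} TOPS within a {POWER_W * 1000:.0f} mW budget")
# -> ~0.36 TOPS, i.e. roughly 3.6e11 ops/s for always-on camera inference
```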

Vega eFPGA

The Vega eFPGA is a flexible programmable solution crafted to enhance SoC designs with substantial ease and efficiency. The IP is designed to offer multiple advantages, including increased performance, reduced cost, secure IP handling, and ease of integration. The Vega eFPGA features a versatile architecture whose configuration can be tailored to varying application requirements, with configurable tiles such as CLBs (Configurable Logic Blocks), BRAM (Block RAM), and DSP (Digital Signal Processing) units. Each CLB includes eight dual-output 6-input lookup tables and an optional fast adder with a carry chain. The BRAM provides 36 Kb of dual-port memory with flexible configuration options, while the DSP tile is designed for complex arithmetic functions with its 18x20 multipliers and a wide 64-bit accumulator. Focused on easing system design and acceleration, the Vega eFPGA integrates and verifies seamlessly into any SoC design. It is backed by a robust EDA toolset and features that allow significant customization, making it adaptable to any semiconductor fabrication process. This flexibility and technological robustness make the Vega eFPGA a standout choice for developing innovative and complex programmable logic solutions.

Rapid Silicon
CPU, Embedded Memories, Multiprocessor / DSP, Processor Core Independent, Vision Processor, WMV
View Details
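A minimal behavioral sketch of the Vega eFPGA DSP tile described above (an 18x20-bit signed multiplier feeding a 64-bit accumulator) follows; the two's-complement wrap-around on overflow is an assumption for illustration and may not match the actual tile.

```python
# Behavioral model of an 18x20-bit signed multiply feeding a 64-bit accumulator,
# matching the DSP tile parameters quoted for the Vega eFPGA. Overflow behavior
# (plain 64-bit wrap) is an assumption for illustration.

def to_signed(value: int, bits: int) -> int:
    """Interpret the low `bits` bits of value as a two's-complement integer."""
    value &= (1 << bits) - 1
    return value - (1 << bits) if value & (1 << (bits - 1)) else value

def dsp_mac(acc: int, a: int, b: int) -> int:
    """One multiply-accumulate step: acc += a (18-bit signed) * b (20-bit signed)."""
    prod = to_signed(a, 18) * to_signed(b, 20)   # at most a 38-bit signed product
    return to_signed(acc + prod, 64)             # wrap into the 64-bit accumulator

acc = 0
for a, b in [(1000, -2000), (131071, 524287), (-131072, -524288)]:
    acc = dsp_mac(acc, a, b)
print(acc)   # accumulated dot product of the three sample pairs
```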

Neural Network Accelerator

The Neural Network Accelerator by Gyrus AI is an advanced compute solution optimized for neural network applications. It features native graph-processing capabilities that significantly enhance the computational efficiency of AI models, delivering 30 TOPS/W and requiring far fewer clock cycles than comparable systems. The architecture is designed to consume 10 to 20 times less power, benefitting from a low memory-usage configuration, and achieves roughly 80% utilization across a variety of model structures, which translates into die-area requirements some 8 to 10 times smaller than conventional designs. Gyrus AI's Neural Network Accelerator also comes with software tools tailored to running neural networks on the platform, making it a practical choice for edge computing applications. It supports large-scale AI computation while minimizing power consumption and area, making it well suited to high-performance environments.

Gyrus AI
AI Processor, Coprocessor, CPU, DSP Core, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Security Processor, Vision Processor
View Details
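To show how the efficiency and utilization figures quoted for the Gyrus AI accelerator combine, here is a rough sizing calculation; the model size and frame rate are invented for the example and are not Gyrus AI data.

```python
# Rough sizing arithmetic from the quoted figures (30 TOPS/W, ~80% utilization).
# The per-frame operation count and frame rate are assumptions for illustration.
TOPS_PER_WATT = 30
UTILIZATION = 0.80

gops_per_frame = 10.0      # assumed ops per inference (10 GOPs)
frames_per_second = 30     # assumed camera frame rate

required_tops = gops_per_frame * frames_per_second / 1000    # effective TOPS needed
power_w = required_tops / (TOPS_PER_WATT * UTILIZATION)
print(f"~{power_w * 1000:.1f} mW to sustain {frames_per_second} fps")
# -> ~12.5 mW for a 10-GOP model at 30 fps under these assumptions
```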

SoC Platform

The SoC Platform by SEMIFIVE enables swift and minimal-effort design of system-on-chip solutions through their streamlined platforms. Built with silicon-proven IPs and optimized methodologies, these platforms significantly reduce costs and risks while ensuring a faster turnaround time. The platform supports domain-specific architectures and offers a pre-configured and verified IP pool, facilitating quick hardware and software bring-up. This platform stands out for its ability to turn ideas into silicon by leveraging SEMIFIVE’s infrastructure and IP partnerships. It promises substantial cost reduction in areas like design NRE, fabrication, and IP licenses, offering savings upwards of 50% compared to industry norms. Its rapid development process is poised to cut development times in half, maintaining high levels of design and verification reusability. The SoC Platform also minimizes engineering risks associated with the complexities of cutting-edge process technologies. By utilizing pre-verified platform IP pools and silicon-proven design components, SEMIFIVE offers a highly reliable and efficient path from concept to silicon production.

SEMIFIVE
Samsung
5nm, 12nm, 14nm
15 Categories
View Details

Pipelined FFT

The Pipelined FFT core delivers streamlined, continuous data-processing capabilities with an architecture designed for pipelined execution of FFT computations. The core is well suited to environments where data arrives continuously and must be processed with minimal delay. Its design minimizes memory footprint while ensuring high-speed data throughput, making it valuable for real-time signal processing applications. By arranging the computation as a pipeline, the core lets each stage work on new data while later stages finish earlier blocks, so samples flow through without stalling. This pipelining reduces overall latency, ensuring that data is processed as quickly as it arrives, which is especially beneficial in time-sensitive applications where delays degrade system performance. The compact design of the Pipelined FFT core integrates well into systems requiring consistent data flow and reduced resource allocation. It offers effective management of continuous data streams, supporting critical applications such as real-time monitoring and control systems, and its rapid data turnover contributes directly to overall system throughput.

Dillon Engineering, Inc.
Intel Foundry, Samsung
90nm, 800nm
Coprocessor, Ethernet, Network on Chip, Receiver/Transmitter, Vision Processor
View Details
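The sketch below is a simple software analogue (not the core's RTL) of the continuous, frame-by-frame processing described above: samples stream in, and each completed frame is transformed immediately so output keeps pace with input. The 1024-point frame length is an assumption chosen for the example.

```python
# Software analogue of streaming FFT processing: frames fill continuously and
# each completed frame is transformed at once, mirroring how a pipelined FFT
# core overlaps input, compute, and output. The frame length is an assumption.
from typing import Iterable, Iterator
import numpy as np

def streaming_fft(samples: Iterable[complex], n: int = 1024) -> Iterator[np.ndarray]:
    """Yield an n-point FFT for every n consecutive input samples."""
    frame = np.empty(n, dtype=np.complex64)
    filled = 0
    for s in samples:
        frame[filled] = s
        filled += 1
        if filled == n:              # frame complete: transform and emit
            yield np.fft.fft(frame)
            filled = 0               # immediately start filling the next frame

# Example: a continuous tone that falls in bin 5 of every 1024-point frame
tone = np.exp(2j * np.pi * 5 * np.arange(4096) / 1024)
for spectrum in streaming_fft(tone, n=1024):
    print(int(np.argmax(np.abs(spectrum))))   # prints 5 for each of the 4 frames
```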