In the realm of modern computing, coprocessor semiconductor IPs play a crucial role in augmenting system capabilities. A coprocessor is a supplementary processor that executes specific tasks more efficiently than the primary central processing unit (CPU). These coprocessors are specialized semiconductor IPs utilized in devices requiring enhanced computational power for particular functions such as graphics rendering, encryption, mathematical calculations, and artificial intelligence (AI) processing.
Coprocessors are integral in sectors where high performance and efficiency are paramount. For instance, in the gaming industry, a graphics processing unit (GPU) acts as a coprocessor to handle the high demand for rendering visuals, thus alleviating the burden from the CPU. Similarly, AI accelerators in smartphones and servers offload intensive AI computation tasks to speed up processing while conserving power.
You will find various coprocessor semiconductor IP products geared toward enhancing computational specialization. These include digital signal processors (DSPs) for processing real-time audio and video signals and hardware encryption coprocessors for securing data transactions. With the rise in machine learning applications, tensor processing units (TPUs) have become invaluable, offering massively parallel computing to efficiently manage AI workloads.
By incorporating these coprocessor semiconductor IPs into a system design, manufacturers can achieve remarkable improvements in speed, power efficiency, and processing power. This enables the development of cutting-edge technology products across a range of fields from personal electronics to autonomous vehicles, ensuring optimal performance in specialized computing tasks.
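The offload pattern described above can be sketched in a few lines: the host CPU hands a specialized task to a dedicated unit and keeps working in parallel. This is a generic illustration only; the "coprocessor" here is an ordinary Python function standing in for real hardware, and all names are invented for the example.

```python
# Minimal sketch of CPU/coprocessor offload. The "coprocessor" is simulated
# by a worker thread; real systems would hand the task to dedicated silicon.
from concurrent.futures import ThreadPoolExecutor

def encrypt_on_coprocessor(data: bytes) -> bytes:
    # Stand-in for a hardware encryption coprocessor: XOR with a fixed key.
    key = 0x5A
    return bytes(b ^ key for b in data)

def main():
    with ThreadPoolExecutor() as pool:
        # Host "offloads" the encryption job and keeps working meanwhile.
        job = pool.submit(encrypt_on_coprocessor, b"sensor payload")
        host_result = sum(range(1000))  # unrelated CPU-side work
        ciphertext = job.result()       # collect the coprocessor's output
    return host_result, ciphertext

print(main())
```

The point of the pattern is the overlap: the host's own computation proceeds while the specialized unit handles the task it is built for.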
Akida's Neural Processor IP represents a leap in AI architecture design, tailored to provide exceptional energy efficiency and processing speed for an array of edge computing tasks. At its core, the processor mimics the synaptic activity of the human brain, efficiently executing tasks that demand high-speed computation and minimal power usage. This processor is equipped with configurable neural nodes capable of supporting innovative AI frameworks such as convolutional and fully-connected neural network processes. Each node accommodates a range of MAC operations, enhancing scalability from basic to complex deployment requirements. This scalability enables the development of lightweight AI solutions suited for consumer electronics as well as robust systems for industrial use. Onboard features like event-based processing and low-latency data communication significantly decrease the strain on host processors, enabling faster and more autonomous system responses. Akida's versatile functionality and ability to learn on the fly make it a cornerstone for next-generation technology solutions that aim to blend cognitive computing with practical, real-world applications.
The Akida IP is a groundbreaking neural processor designed to emulate the cognitive functions of the human brain within a compact and energy-efficient architecture. This processor is specifically built for edge computing applications, providing real-time AI processing for vision, audio, and sensor fusion tasks. The scalable neural fabric, ranging from 1 to 128 nodes, features on-chip learning capabilities, allowing devices to adapt and learn from new data with minimal external inputs, enhancing privacy and security by keeping data processing localized. Akida's unique design supports 4-, 2-, and 1-bit weight and activation operations, maximizing computational efficiency while minimizing power consumption. This flexibility in configuration, combined with a fully digital neuromorphic implementation, ensures a cost-effective and predictable design process. Akida is also equipped with event-based acceleration, drastically reducing the demands on the host CPU by facilitating efficient data handling and processing directly within the sensor network. Additionally, Akida's on-chip learning supports incremental learning techniques like one-shot and few-shot learning, making it ideal for applications that require quick adaptation to new data. These features collectively support a broad spectrum of intelligent computing tasks, including object detection and signal processing, all performed at the edge, thus eliminating the need for constant cloud connectivity.
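The 4-, 2-, and 1-bit weight modes mentioned above can be illustrated with a generic symmetric quantization scheme. This is a textbook sketch of why low-bit weights trade precision for memory and compute savings, not BrainChip's actual quantizer.

```python
# Illustrative symmetric quantization for 4-, 2- and 1-bit weights.
# Generic scheme for exposition only -- not Akida's actual algorithm.
def quantize(weights, bits):
    if bits == 1:
        # 1-bit weights reduce to a sign plus one shared scale
        # (BinaryConnect-style; an assumption for this sketch).
        scale = sum(abs(w) for w in weights) / len(weights)
        return [1 if w >= 0 else -1 for w in weights], scale
    qmax = 2 ** (bits - 1) - 1                      # e.g. 7 for 4-bit
    scale = max(abs(w) for w in weights) / qmax
    q = [max(-qmax, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.42, -0.17, 0.91, -0.66]
for bits in (4, 2, 1):
    q, s = quantize(w, bits)
    print(f"{bits}-bit:", q, "->", [round(v, 2) for v in dequantize(q, s)])
```

Each halving of the bit width halves weight storage again, which is why a fully digital design supporting down to 1-bit operation can be so frugal with both memory and power.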
MetaTF is BrainChip's premier development tool platform designed to complement its neuromorphic technology solutions. This platform is a comprehensive toolkit that empowers developers to convert and optimize standard machine learning models into formats compatible with BrainChip's Akida technology. One of its key advantages is its ability to adjust models into sparse formats, enhancing processing speed and reducing power consumption. The MetaTF framework provides an intuitive interface for integrating BrainChip’s specialized AI capabilities into existing workflows. It supports streamlined adaptation of models to ensure they are optimized for the unique characteristics of neuromorphic processing. Developers can utilize MetaTF to rapidly iterate and refine AI models, making the deployment process smoother and more efficient. By providing direct access to pre-trained models and tuning mechanisms, MetaTF allows developers to capitalize on the benefits of event-based neural processing with minimal configuration effort. This platform is crucial for advancing the application of machine learning across diverse fields such as IoT devices, healthcare technology, and smart infrastructure.
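The sparse-format conversion described above can be illustrated with simple magnitude pruning: zero out the smallest weights so that most multiply-accumulates can be skipped. This is a generic sketch of the idea; MetaTF's real conversion pipeline and pruning strategy are BrainChip-specific.

```python
# Generic magnitude pruning -- an illustration of sparse conversion, not
# MetaTF's actual API or algorithm.
def magnitude_prune(weights, sparsity=0.75):
    """Zero out the smallest-magnitude weights until `sparsity` of them are 0."""
    k = int(len(weights) * sparsity)          # how many weights to drop
    threshold = sorted(abs(w) for w in weights)[k - 1] if k else 0.0
    return [0.0 if abs(w) <= threshold else w for w in weights]

w = [0.9, -0.05, 0.4, 0.01, -0.8, 0.02, 0.03, -0.6]
print(magnitude_prune(w))  # only the largest-magnitude weights survive
```

A sparse, event-driven engine only spends work on the nonzero entries, which is where the speed and power benefits of this conversion come from.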
The xcore.ai platform by XMOS is a versatile, high-performance microcontroller designed for the integration of AI, DSP, and real-time I/O processing. Focusing on bringing intelligence to the edge, this platform facilitates the construction of entire DSP systems using software without the need for multiple discrete chips. Its architecture is optimized for low-latency operation, making it suitable for diverse applications from consumer electronics to industrial automation. This platform offers a robust set of features conducive to sophisticated computational tasks, including support for AI workloads and enhanced control logic. The xcore.ai platform streamlines development processes by providing a cohesive environment that blends DSP capabilities with AI processing, enabling developers to realize complex applications with greater efficiency. By doing so, it reduces the complexity typically associated with chip integration in advanced systems. Designed for flexibility, xcore.ai supports a wide array of applications across various markets. Its ability to handle audio, voice, and general-purpose processing makes it an essential building block for smart consumer devices, industrial control systems, and AI-powered solutions. Coupled with comprehensive software support and development tools, the xcore.ai ensures a seamless integration path for developers aiming to push the boundaries of AI-enabled technologies.
RaiderChip's GenAI v1 is a pioneering hardware-based generative AI accelerator, designed to perform local inference at the Edge. This technology integrates optimally with on-premises servers and embedded devices, offering substantial benefits in privacy, performance, and energy efficiency over traditional hybrid AI solutions. The design of the GenAI v1 NPU streamlines the execution of large language models by embedding them directly onto the hardware, eliminating the need for external components like CPUs or internet connections. With its ability to support complex models such as Llama 3.2 with 4-bit quantization on LPDDR4 memory, the GenAI v1 achieves unprecedented efficiency in AI token processing, coupled with energy savings and reduced latency. What sets GenAI v1 apart is its scalability and cost-effectiveness, significantly outperforming competitive solutions such as Intel Gaudi 2, Nvidia's cloud GPUs, and Google's cloud TPUs in terms of memory efficiency. This solution maximizes the number of tokens generated per unit of memory bandwidth, thus addressing one of the primary limitations in generative AI workflows. Furthermore, the adept memory usage of GenAI v1 reduces the dependency on costly memory types like HBM, opening the door to more affordable alternatives without diminishing processing capabilities. With a target-agnostic approach, RaiderChip ensures the GenAI v1 can be adapted to various FPGAs and ASICs, offering configuration flexibility that allows users to balance performance with hardware costs. Its compatibility with a wide range of transformer-based models, including proprietary modifications, positions GenAI v1 strongly in sectors requiring high-speed processing, such as finance, medical diagnostics, and autonomous systems. RaiderChip's innovation with GenAI v1 focuses on supporting both vanilla and quantized AI models, ensuring the high computation speeds necessary for real-time applications without compromising accuracy.
This capability underpins their strategic vision of enabling versatile and sustainable AI solutions across industries. By prioritizing integration ease and operational independence, RaiderChip provides a tangible edge in applying generative AI effectively and widely.
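The "tokens per unit of memory bandwidth" framing can be made concrete with a back-of-the-envelope model: generating one token streams essentially every model weight through memory once, so throughput is roughly bandwidth divided by model size. All figures below are illustrative assumptions, not RaiderChip measurements.

```python
# Why LLM token generation is memory-bandwidth bound, in one formula.
# Every number here is an assumption chosen for illustration.
def tokens_per_second(params_billion, bits_per_weight, bandwidth_gb_s):
    model_gb = params_billion * bits_per_weight / 8   # bytes streamed per token
    return bandwidth_gb_s / model_gb

lpddr4_bw = 25.0   # GB/s -- a plausible LPDDR4 figure (assumption)
for bits in (16, 4):
    print(f"{bits}-bit weights: ~{tokens_per_second(3, bits, lpddr4_bw):.1f} tok/s")
```

Under this simple model, 4-bit quantization yields roughly 4x the token rate of 16-bit weights at the same bandwidth, which is why squeezing more tokens out of each byte of bandwidth matters more than raw compute.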
The GenAI v1-Q from RaiderChip brings a specialized focus on quantized AI operations, significantly reducing memory requirements while maintaining impressive precision and speed. This accelerator is engineered to execute large language models in real time, utilizing advanced quantization techniques such as Q4_K and Q5_K, thereby enhancing AI inference efficiency, especially in memory-constrained environments. By offering a 276% boost in processing speed alongside a 75% reduction in memory footprint, GenAI v1-Q empowers developers to integrate advanced AI capabilities into smaller, less powerful devices without sacrificing operational quality. This makes it particularly advantageous for applications demanding swift response times and low latency, including real-time translation, autonomous navigation, and responsive customer interactions. The GenAI v1-Q diverges from conventional AI solutions by functioning independently, with no dependence on external networks or cloud services. Its design combines superior computational performance with scalability, allowing seamless adaptation across varied hardware platforms, including FPGA and ASIC implementations. This flexibility is crucial for tailoring performance parameters like model scale, inference speed, and power consumption to meet exacting user specifications. RaiderChip's GenAI v1-Q addresses crucial AI industry needs with its ability to manage multiple transformer-based models and confidential data securely on premises. This opens doors for its application in sensitive areas such as defense, healthcare, and financial services, where confidentiality and rapid processing are paramount. With GenAI v1-Q, RaiderChip underscores its commitment to advancing AI solutions that are both environmentally sustainable and economically viable.
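The quoted 75% memory reduction follows from moving 16-bit weights to block-wise 4-bit storage. The sketch below does the simplified accounting for a Q4-style format in which each group of weights shares one scale; the real Q4_K/Q5_K layouts differ in detail, and the block size and scale width here are assumptions.

```python
# Simplified storage accounting for block-wise 4-bit quantization.
# Block size and scale width are assumptions, not the exact Q4_K layout.
def q4_block_bytes(n_weights, block=32, scale_bits=16):
    data = n_weights * 4 / 8                        # 4 bits per weight
    scales = (n_weights / block) * scale_bits / 8   # one shared scale per block
    return data + scales

n = 1_000_000_000                    # 1B-parameter model (illustrative)
fp16_bytes = n * 2
print(f"footprint reduction vs FP16: {1 - q4_block_bytes(n) / fp16_bytes:.1%}")
```

The result lands near 72%, in the same ballpark as the quoted 75%; the small gap is the overhead of storing a scale per block, which real formats pay in exchange for better precision.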
Altek's 3D Imaging Chip is a breakthrough in the field of vision technology. Designed with an emphasis on depth perception, it enhances the accuracy of 3D scene capturing, making it ideal for applications requiring precise distance gauging such as autonomous vehicles and drones. The chip integrates seamlessly within complex systems, boasting superior recognition accuracy that ensures reliable and robust performance. Building upon years of expertise in 3D imaging, this chip supports multiple 3D modes, offering flexible solutions for devices from surveillance robots to delivery mechanisms. It facilitates medium-to-long-range detection needs thanks to its refined depth sensing capabilities. Altek's approach ensures a comprehensive package from modular design to chip production, creating a cohesive system that marries both hardware and software effectively. Deployed within various market segments, it delivers adaptable image solutions with dynamic design agility. Its imaging prowess is further enhanced by state-of-the-art algorithms that refine image quality and facilitate facial detection and recognition, thereby expanding its utility across diverse domains.
The Spiking Neural Processor T1 is designed as a highly efficient microcontroller that integrates neuromorphic intelligence closely with sensors. It employs a unique spiking neural network engine paired with a nimble RISC-V processor core, forming a cohesive unit for advanced data processing. With this setup, the T1 excels in delivering next-gen AI capabilities embedded directly at the sensor, operating within an exceptionally low power consumption range, ideal for battery-dependent and latency-sensitive applications. This processor marks a notable advancement in neuromorphic technology, allowing for real-time pattern recognition with minimal power draw. It supports various interfaces like QSPI, I2C, and UART, fitting into a compact 2.16mm x 3mm package, which facilitates easy integration into diverse electronic devices. Additionally, its architecture is designed to process different neural network models efficiently, from spiking to deep neural networks, providing versatility across applications. The T1 Evaluation Kit furthers this ease of adoption by enabling developers to use the Talamo SDK to create or deploy applications readily. It includes tools for performance profiling and supports numerous common sensors, making it a strong candidate for projects aiming to leverage low-power, intelligent processing capabilities. This innovative chip's ability to manage power efficiency with high-speed pattern processing makes it especially suitable for advanced sensing tasks found in wearables, smart home devices, and more.
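The spiking computation at the heart of such processors can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron: membrane potential accumulates weighted input events, leaks over time, and emits a spike when it crosses a threshold. The parameters below are illustrative, not Innatera's.

```python
# Minimal leaky integrate-and-fire neuron -- the basic unit of a spiking
# network. Weight, decay and threshold values are arbitrary for the demo.
def lif_run(inputs, weight=0.6, decay=0.8, threshold=1.0):
    v, spikes = 0.0, []
    for event in inputs:            # 1 = an input spike this timestep
        v = v * decay + weight * event
        if v >= threshold:
            spikes.append(1)
            v = 0.0                 # reset membrane potential after firing
        else:
            spikes.append(0)
    return spikes

print(lif_run([1, 1, 0, 1, 1, 1, 0, 0]))
```

Because the neuron only does meaningful work when input events arrive, a hardware implementation sits idle (and draws little power) during quiet periods, which is the source of the low power draw described above.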
The Maverick-2 Intelligent Compute Accelerator (ICA) by Next Silicon represents a transformative leap in high-performance compute architecture. It seamlessly integrates into HPC systems with a pioneering software-defined approach that dynamically optimizes hardware configurations based on real-time application demands. This enables high efficiency and unparalleled performance across diverse workloads including HPC, AI, and other data-intensive applications. Maverick-2 harnesses a 5nm process technology, utilizing HBM3E memory for enhanced data throughput and efficient energy usage.

Built with developers in mind, Maverick-2 supports an array of programming languages such as C/C++, FORTRAN, and OpenMP without the necessity for proprietary stacks. This flexibility not only mitigates porting challenges but significantly reduces development time and costs. A distinguishing feature of Maverick-2 is its real-time telemetry capabilities that provide valuable insights into performance metrics, allowing for refined optimizations during execution.

The architecture supports versatile interfaces such as PCIe Gen 5 and offers configurations that accommodate complex workloads using either single or dual-die setups. Its intelligent algorithms autonomously identify computational bottlenecks to enhance throughput and scalability, thus future-proofing investments as computing demands evolve. Maverick-2's utility spans various sectors including life sciences, energy, and fintech, underlining its adaptability and high-performance capabilities.
The 2D FFT core is designed to efficiently handle two-dimensional FFT processing, ideal for applications in image and video processing where data is inherently two-dimensional. This core is engineered to integrate both internal and external memory configurations, which optimize data handling for complex multimedia processing tasks, ensuring a high level of performance is maintained throughout. Utilizing sophisticated algorithms, the 2D FFT core processes data through two FFT engines. This dual approach maximizes throughput, typically limiting bottlenecks to memory bandwidth constraints rather than computational delays. This efficiency is critical for applications handling large volumes of multimedia data where real-time processing is a requisite. The capacity of the 2D FFT core to adapt to varying processing environments marks its versatility in the digital processing landscape. By ensuring robust data processing capabilities, it addresses the challenges of dynamic data movement, providing the reliability necessary for multimedia systems. Its strategic design supports the execution of intensive computational tasks while maintaining the operational flow integral to real-time applications.
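The dual-engine structure described above maps directly onto the standard row-column decomposition: a 2D FFT is a 1D FFT over every row followed by a 1D FFT over every column. The sketch below demonstrates the equivalence with NumPy; it illustrates the math, not the core's internal implementation.

```python
# Row-column decomposition of the 2D FFT: two 1D passes, one per engine.
import numpy as np

def fft2_row_column(x):
    rows = np.fft.fft(x, axis=1)     # pass 1: FFT each row
    return np.fft.fft(rows, axis=0)  # pass 2: FFT each column

x = np.random.rand(8, 8)
assert np.allclose(fft2_row_column(x), np.fft.fft2(x))
```

Between the two passes the intermediate matrix must be read back column-wise, which is why, as noted above, throughput in such cores is typically bounded by memory bandwidth rather than by the FFT arithmetic itself.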
Syntacore’s SCR9 processor core stands out as a powerful force in handling high-performance computing tasks with its dual-issue out-of-order 12-stage pipeline. This core is engineered for environments that demand peak computational ability and robust pipeline execution, crucial for data-intensive tasks such as AI and ML, enterprise applications, and network processing. The architecture is tailored to support extensive multicore and heterogeneous configurations, providing valuable tools for developers aiming to maximize workload efficiency and processing speed. The inclusion of a vector processing unit (VPU) underscores its capability to handle large datasets and complex calculations, while maintaining system integrity and coherence through its comprehensive cache management. With support for hypervisor functionalities and scalable Linux environments, the SCR9 continues to be a key strategic element in expanding the horizons of RISC-V-based applications. Syntacore’s extensive library of development resources further enriches the usability of this core, ensuring that its implementation remains smooth and effective across diverse technological landscapes.
The RISC-V SoC developed by Dyumnin Semiconductors is engineered around a 64-bit quad-core server-class RISC-V CPU, aiming to bridge various application needs with an integrated, holistic system design. Each subsystem of this SoC, from AI/ML capabilities to automotive and multimedia functionalities, is constructed to deliver optimal performance and streamlined operations. Designed as a reference model, this SoC enables quick adaptation and deployment, significantly reducing time-to-market for clients. The AI Accelerator subsystem enhances AI operations by pairing a custom central processing unit with a specialized tensor unit. In the multimedia domain, the SoC boasts integration capabilities for HDMI, DisplayPort, MIPI, and other advanced graphics and audio technologies, ensuring versatile application across various multimedia requirements. Memory handling is another strength of this SoC, with support for protocols ranging from DDR and MMC to more advanced interfaces like ONFI and SD/SDIO, ensuring seamless connectivity with a wide array of memory modules. Moreover, the communication subsystem encompasses a broad spectrum of connectivity protocols, including PCIe, Ethernet, USB, and SPI, crafting a well-rounded solution for modern communication challenges. The automotive subsystem, offering CAN and CAN-FD protocols, further extends its utility into automotive connectivity.
The DisplayPort Transmitter from Trilinear Technologies is a sophisticated solution designed for high-performance digital video streaming applications. It is compliant with the latest VESA DisplayPort standards, ensuring compatibility and seamless integration with a wide range of display devices. This transmitter core supports high-resolution video outputs and is equipped with advanced features like adaptive sync and panel refresh options, making it ideal for consumer electronics, automotive displays, and professional AV systems. This IP core provides reliable performance with minimal power consumption, addressing the needs of modern digital ecosystems where energy efficiency is paramount. It includes customizable settings for audio and video synchronization, ensuring optimal output quality and user experience across different devices and configurations. By reducing load on the system processor, the DisplayPort Transmitter guarantees a seamless streaming experience even in high-demand environments. In terms of integration, Trilinear's DisplayPort Transmitter is supported with comprehensive software stacks allowing for easy customization and deployment. This ensures rapid product development cycles and aids developers in managing complex video data streams effectively. The transmitter is particularly optimized for use in embedded systems and consumer devices, offering robust performance capabilities that stand up to rigorous real-time application demands. With a focus on compliance and testing, the DisplayPort Transmitter is pre-tested and proven to work seamlessly with a variety of hardware platforms including FPGA and ASIC technologies. This robustness in design and functionality underlines Trilinear's reputation for delivering reliable, high-quality semiconductor IP solutions that cater to diverse industrial applications.
The Veyron V1 is a high-performance RISC-V CPU designed to meet the rigorous demands of modern data centers and compute-intensive applications. This processor is tailored for cloud environments requiring extensive compute capabilities, offering substantial power efficiency while optimizing processing workloads. It provides comprehensive architectural support for virtualization and efficient task management with its robust feature set. Incorporating advanced RISC-V standards, the Veyron V1 ensures compatibility and scalability across a wide range of industries, from enterprise servers to high-performance embedded systems. Its architecture is engineered to offer seamless integration, providing an excellent foundation for robust, scalable computing designs. Equipped with state-of-the-art processing cores and enhanced vector acceleration, the Veyron V1 delivers unmatched throughput and performance management, making it suitable for use in diverse computing environments.
The iCan PicoPop® is a highly compact System on Module (SOM) based on the Zynq UltraScale+ MPSoC from Xilinx, suited for high-performance embedded applications in aerospace. Known for its advanced signal processing capabilities, it is particularly effective in video processing contexts, offering efficient data handling and throughput. Its compact size and performance make it ideal for integration into sophisticated systems where space and performance are critical.
Trilinear Technologies has developed a cutting-edge DisplayPort Receiver that enhances digital connectivity, offering robust video reception capabilities necessary for today's high-definition video systems. Compliant with VESA standards, the receiver supports the latest DisplayPort specifications, effortlessly handling high-bandwidth video data necessary for applications such as ultra-high-definition televisions, professional video wall setups, and complex automotive display systems. The DisplayPort Receiver is designed with advanced features that facilitate seamless video data acquisition and processing, including multi-stream transport capabilities for handling multiple video streams concurrently. This is particularly useful in professional display settings where multiple input sources are needed. The core also incorporates adaptive sync features, which help reduce screen tearing and ensure smooth video playback, enhancing user experience significantly. An important facet of the DisplayPort Receiver is its low latency and high-efficiency operations, crucial for systems requiring real-time data processing. Trilinear's receiver core ensures that video data is processed with minimal delay, maintaining the integrity and fidelity of the original visual content. This makes it a preferred choice for high-performance applications in sectors like gaming, broadcasting, and high-definition video conferencing. To facilitate integration and ease of use, the DisplayPort Receiver is supported by a comprehensive suite of development tools and software packages. This makes the deployment process straightforward, allowing developers to integrate the receiver into both FPGA and ASIC environments with minimal adjustments. Its scalability and flexibility mean it can meet the demands of a wide range of applications, solidifying Trilinear Technologies' position as a leader in the field of semiconductor IP solutions.
The RFicient chip is designed for Internet of Things (IoT) applications and is recognized for its ultra-low-power operation. It aims to innovate the IoT landscape by offering a highly efficient receiver technology that significantly reduces power consumption. This chip supports energy harvesting to ensure sustainable operation and contributes to green IoT development by lessening the dependency on traditional power sources. Functionally, the RFicient chip enhances IoT devices' performance by providing cutting-edge reception capabilities, which allow for the consistent and reliable transmission of data across varied environments. This robustness makes it ideal for industrial IoT settings, including smart cities and agricultural monitoring, where data integrity and longevity are crucial. Technically, the RFicient chip's architecture employs intelligent design strategies that leverage low-latency responses in data processing, making it responsive and adaptable to rapid changes in its operational environment. These characteristics position it as a versatile solution for businesses aiming to deploy IoT networks with minimal environmental footprint and extended operational lifespan.
The RISC-V Hardware-Assisted Verification by Bluespec is a high-performance platform designed for swift and precise verification of RISC-V cores. It supports testing at both the core level (ISA) and system level, accommodating RTOS and Linux-based environments. This solution can verify standard ISA extensions, custom ISA extensions, and integrated accelerators, making it a versatile tool for various verification needs. One of the standout features of this platform is its scalability and accessibility via the AWS cloud, which ensures that resources can be tapped into as needed, enabling efficient verification anytime, anywhere. Such scalability is crucial for teams that require the flexibility to test various designs without being confined to local server limitations. With an emphasis on broad compatibility, the RISC-V Hardware-Assisted Verification platform is ideal for those involved in developing RISC-V based systems. It assists developers in ensuring their designs are accurate and reliable before deployment, reducing errors and speeding up time-to-market.
DolphinWare IPs is a versatile portfolio of intellectual property solutions that enable efficient SoC design. This collection includes various control logic components such as FIFO, arbiter, and arithmetic components like math operators and converters. In addition, the logic components span counters, registers, and multiplexers, providing essential functionalities for diverse industrial applications. The IPs in this lineup are meticulously designed to ensure data integrity, supported by robust verification IPs for AXI4, APB, SD4.0, and more. This comprehensive suite meets the stringent demands of modern electronic designs, facilitating seamless integration into existing design paradigms. Beyond their broad functionality, DolphinWare’s offerings are fundamental to applications requiring specific control logic and data integrity solutions, making them indispensable for enterprises looking to modernize or expand their product offerings while ensuring compliance with industry standards.
The Origin E1 is a streamlined neural processing unit designed specifically for always-on applications in personal electronics and smart devices such as smartphones and security systems. This processor focuses on delivering highly efficient AI performance, achieving around 18 TOPS per watt. With its low power requirements, the E1 is ideally suited for tasks demanding continuous data sampling, such as camera operations in smart surveillance systems where it runs on less than 20mW of power. Its packet-based architecture ensures efficient resource utilization, maintaining high performance with lower power and area consumption. The E1's adaptability is enhanced through customizable options, allowing it to meet specific PPA requirements effectively, making it the go-to choice for applications seeking to improve user privacy and experience by minimizing external memory use.
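The two figures quoted above connect with simple arithmetic: efficiency (TOPS/W) multiplied by the power budget gives the effective compute available to an always-on pipeline. The calculation below is a sanity check on those published numbers, nothing more.

```python
# Tie together the Origin E1's quoted efficiency and power figures.
tops_per_watt = 18.0                 # quoted efficiency
power_w = 0.020                      # "less than 20 mW" operating point
effective_gops = tops_per_watt * power_w * 1000
print(f"~{effective_gops:.0f} GOPS within a 20 mW budget")
```

Hundreds of GOPS within a 20 mW envelope is what makes continuous camera sampling feasible without draining a battery or waking a larger host processor.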
The Neural Network Accelerator by Gyrus AI is an advanced compute solution specially optimized for neural network applications. It features native graph processing capabilities that significantly enhance the computational efficiency of AI models. This IP component supports high-speed operations at 30 TOPS/W, offering exceptional performance that significantly reduces the clock cycles typically required by other systems.

Moreover, the architecture is designed to consume 10-20 times less power, benefitting from a low-memory usage configuration. This efficiency is further highlighted by the IP’s ability to achieve an 80% utilization rate across various model structures, which translates into die area reductions of 8 to 10 times compared with conventional designs.

Gyrus AI’s Neural Network Accelerator also supports seamless integration of software tools tailored to run neural networks on the platform, making it a practical choice for edge computing applications. It not only supports large-scale AI computations but also minimizes power consumption and space constraints, making it ideal for high-performance environments.
The Pipelined FFT core delivers streamlined continuous data processing capabilities with an architecture designed for pipelined execution of FFT computations. This core is perfectly suited for use in environments where data is fed continuously and needs to be processed with minimal delays. Its design minimizes memory footprint while ensuring high-speed data throughput, making it invaluable for real-time signal processing applications. By structurally arranging computations into a pipeline, the core facilitates a seamless flow of operations, allowing for one-step-after-another processing of data. The efficiency of the pipelining process reduces the system's overall latency, ensuring that data is processed as quickly as it arrives. This functionality is especially beneficial in time-sensitive applications where downtime can impact system performance. The compact design of the Pipelined FFT core integrates well into systems requiring consistent data flow and reduced resource allocation. It offers effective management of continuous data streams, supporting critical applications in areas such as real-time monitoring and control systems. By ensuring rapid data turnover, this core enhances system efficiency and contributes significantly to achieving strategic processing objectives.
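The latency benefit of pipelining can be quantified with a toy model: with S stages of one cycle each, the first result appears after S cycles, but a new result completes every cycle thereafter, instead of every input paying the full depth. The stage count and input count below are illustrative, not this core's actual parameters.

```python
# Toy cycle count for a pipelined vs. a non-overlapped (serial) datapath.
def pipeline_cycles(n_inputs, stages):
    # fill the pipeline once, then one result per cycle
    return stages + (n_inputs - 1)

def serial_cycles(n_inputs, stages):
    # no overlap: every input traverses the full depth alone
    return n_inputs * stages

n, s = 1024, 8
print(pipeline_cycles(n, s), "vs", serial_cycles(n, s))  # 1031 vs 8192
```

For long continuous streams the fill cost is amortized away, which is why a pipelined FFT keeps pace with data arriving every cycle.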
Avispado is a 64-bit in-order RISC-V core designed by Semidynamics, focused on offering efficiency for AI edge applications and embedded systems. Its in-order execution model ensures optimal power consumption, making it a prudent choice for energy-constrained environments such as IoT and mobile devices. Equipped with Gazzillion Misses™ technology, Avispado effectively manages memory requests for efficient data handling, crucial for processing real-time data streams in embedded setups. This core supports the RISC-V Vector Specification 1.0, making it an excellent fit for machine learning tasks that require vector processing capabilities. Avispado also features customizable components like branch predictors and configurable instruction/data caches, which can be tailored to fit specific application needs. Its multiprocessor ready design is scalable for systems demanding cache coherence and reliable data access, supporting a wide range of AI and computation-heavy applications.
The UltraLong FFT core is specifically optimized for Xilinx FPGAs and designed to handle extensive data processing tasks efficiently. With an architecture that accommodates large-scale FFT applications, this core is engineered to maximize throughput while minimizing memory usage. Ideal for building high-speed data processing pipelines, the UltraLong FFT core supports advanced signal processing with exceptional speed and accuracy, providing a reliable solution for real-time applications that demand robust performance. This FFT core integrates seamlessly with external memory systems, utilizing dual FFT engines to achieve maximum data throughput, which is typically constrained only by the bandwidth of the memory. The two FFT engines operate in tandem, allowing for rapid data computation and making the core well suited for high-end computation needs. Additionally, the design's flexibility allows for easy adaptation to various signal processing demands, ensuring it meets the specific requirements of different applications. A hallmark of the UltraLong FFT core is its finely tuned integration capability, which leverages external memory and custom control logic to streamline data handling. This makes it highly suited for industries requiring precise control over data transformation and real-time data processing. Whether employed in digital communication or image processing, this core offers the computational prowess necessary to maintain efficiency across complex datasets.
TimbreAI T3 addresses audio processing needs by embedding AI in sound-based applications, and is particularly suitable for power-constrained devices like wireless headsets. It is engineered for exceptional power efficiency, requiring less than 300 µW to operate while sustaining 3.2 GOPS of performance. This AI inference engine simplifies deployment by requiring no changes to existing trained models, thus preserving accuracy and efficiency. The TimbreAI T3's architecture handles noise reduction seamlessly, offering core audio neural network support. This capability is complemented by a flexible software stack, further reinforcing its strength as a low-power, high-functionality solution for state-of-the-art audio applications.
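As a rough sanity check on the published figures (assuming the full 300 µW budget is consumed, so the true number is a lower bound), the implied energy efficiency works out to roughly 10.7 TOPS/W:

```python
# Efficiency floor implied by the published TimbreAI T3 figures:
# 3.2 GOPS at under 300 uW of power draw.
gops = 3.2
power_w = 300e-6                       # upper bound on power
tops_per_w = (gops / 1000) / power_w   # 1 TOPS = 1000 GOPS
assert round(tops_per_w, 2) == 10.67   # i.e. at least ~10.7 TOPS/W
```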
The Atrevido core from Semidynamics is a 64-bit out-of-order RISC-V processor, engineered for high performance in artificial intelligence and high-performance computing (HPC) environments. Offering extensive customization, it supports 2/3/4-wide design configurations, making it well-suited for handling intricate AI workloads that require significant processing bandwidth. Atrevido is capable of executing multiple operations simultaneously thanks to its Gazzillion Misses™ technology, which can manage up to 128 memory requests concurrently, reducing processing bottlenecks. This core is optimized for applications requiring high data throughput and is compatible with AXI and CHI interfaces, facilitating integration into advanced multiprocessor systems. Additionally, Atrevido comes vector and tensor ready, enabling it to support complex AI tasks, including key-value stores and machine learning. It includes advanced features such as vector extensions and memory interface enhancements, which improve performance in systems that demand robust computational power and flexibility.
The Raptor N3000 AI Accelerator is a robust solution tailored for AI tasks, designed to optimize efficiency and speed for AI inferencing applications. Its architecture allows for accelerated computation, reducing the time needed for complex AI model processing. Equipped with ASIC technology tuned for high-performance inference, the Raptor N3000 delivers uncompromising accuracy and efficiency. It's engineered to tackle the demands of modern AI systems, ensuring tasks can be executed with minimal delay and high accuracy. This accelerator is essential for organizations needing reliable and fast AI processing solutions, providing a high-density and power-effective accelerator that seamlessly integrates into existing systems. Through its advanced interfacing capabilities, the Raptor N3000 is an indispensable asset for advancing AI capabilities across varied sectors.
Engineered for top-tier AI applications, the Origin E8 excels in delivering high-caliber neural processing for industries spanning automotive solutions to complex data center implementations. The E8's design supports single-core performance of up to 128 TOPS, while its adaptive architecture allows easy multi-core scaling to exceed a PetaOps. This architecture eliminates the performance bottlenecks commonly associated with tiling, delivering robust throughput without unnecessary power or area compromises. With an impressive suite of features, the E8 provides remarkable computational capacity, ensuring that even the most intricate AI networks run smoothly. This high-performance capability, combined with relatively low power usage, positions the E8 as a leader in AI processing technologies where high efficiency and reliability are imperative.
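Assuming the vendor's claim of linear multi-core scaling (an idealization; real aggregate throughput depends on workload partitioning), eight cores suffice to cross the PetaOps threshold:

```python
import math

# Scaling the quoted 128 TOPS single-core figure to the PetaOps claim:
core_tops = 128
target_tops = 1000                        # 1 PetaOps = 1000 TOPS
cores_needed = math.ceil(target_tops / core_tops)
assert cores_needed == 8
assert cores_needed * core_tops == 1024   # 1.024 PetaOps aggregate
```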
Neuchips' Viper Series Gen AI PCIe Card is engineered for advanced AI applications, aiming to enhance inferencing speed and efficiency. This card is optimized for handling deep learning tasks and is well-suited for scalable AI environments, providing users with a robust solution for deploying AI applications. Its design focuses on achieving high throughput and low latency, ensuring swift execution of AI functions critical to business operations. By shifting significant computational tasks from the central processor, it allows for more efficient resource allocation and improved system performance. The Viper Series is compatible with a broad range of AI models, offering flexibility and ease of integration for modern businesses. Its emphasis on power efficiency and advanced software compatibility makes it a top choice for organizations seeking to leverage AI technology in a power-efficient manner, without compromising on performance.
The Cobalt GNSS Receiver represents a paradigm shift in the design of System-on-Chip (SoC) technologies, particularly in its integration of ultra-low-power GNSS capabilities. Developed in collaboration with CEVA DSP and supported by the European Space Program Agency, Cobalt is engineered for efficiency and precision in resource-constrained environments. Its architecture supports standalone and cloud-assisted positioning using Galileo, GPS, and Beidou constellations, optimizing the balance between power consumption and market reach. One of the distinctive features of Cobalt is its ability to integrate seamlessly into NB-IoT SoCs, providing an easy GNSS option that is cost-effective and resource-efficient. By leveraging shared resources between the GNSS receiver and modem, this solution not only reduces the footprint of the device but also enhances its cost efficiency, making it an attractive option for mass-market applications. Critical sectors such as logistics, agriculture, insurance, and even animal tracking benefit from Cobalt’s ability to maintain high sensitivity and accuracy, while operating at low power consumption. Cobalt’s design incorporates advanced processing techniques that ensure low MIPS and memory requirements, contributing to its small size and low operational costs. This strategic use of technology empowers clients to deploy wide-scale tracking applications with confidence, knowing that their solutions are backed by robust and reliable location tracking capabilities. With its state-of-the-art sensitivity and precision, Cobalt stands as a pivotal element in the evolution of GNSS technology integration into modern IoT systems.
The Origin E2 NPU cores offer a balanced solution for AI inference by optimizing for both power and area without compromising performance. These cores are expertly crafted to save system power in devices such as smartphones and edge nodes. Their design supports a wide variety of networks, including RNNs and CNNs, catering to the dynamic demands of consumer and industrial applications. With customizable performance ranging from 1 to 20 TOPS, they are adept at handling various AI-driven tasks while reducing latency. The E2 architecture is ingeniously configured to enable parallel processing, affording high resource utilization that minimizes memory demands and system overhead. This results in a flexible NPU architecture that serves as a reliable backbone for deploying efficient AI models across different platforms.
Specialty Microcontrollers from Advanced Silicon harness the capabilities of the latest RISC-V architectures for advanced processing tasks. These microcontrollers are particularly suited to applications involving image processing, thanks to built-in coprocessing units that enhance their algorithm execution efficiency. They serve as ideal platforms for sophisticated touch screen interfaces, offering a balance of high performance, reliability, and low power consumption. The integrated features allow for the enhancement of complex user interfaces and their interaction with other system components to improve overall system functionality and user experience.
The Mixed Radix FFT core caters to applications requiring diverse FFT lengths beyond traditional radix-2 implementations. This versatility enables users to execute FFT with different radix combinations, such as radix-3, radix-5, or radix-7, enhancing its adaptability across various transformative needs. As a result, it's a robust solution for critical data processing tasks where standard FFT cores might fall short. The architecture of the Mixed Radix FFT core supports flexible data processing requirements, ensuring compatibility with a wide range of FFT paradigms. This adaptability allows it to be integrated into bespoke systems that require specific FFT configurations, thereby expanding its usefulness in diverse applications. With efficient management of computational resources, it ensures that data transformation maintains speed without sacrificing precision. Focused on complex data transformation tasks, the Mixed Radix FFT core is designed to seamlessly accommodate FFT calculations with varying radix factors. This flexibility is invaluable for applications in advanced digital communications and multimedia processing, where data dynamics necessitate rapid yet accurate computational adjustments. By incorporating these capabilities, the core serves as a pivotal component in sophisticated digital transformation ecosystems.
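The radix-3, radix-5, and radix-7 combinations mentioned above follow from the general Cooley-Tukey factorization. This Python sketch (a reference model, not the vendor's implementation) splits a composite length N into its smallest prime factor r and N/r, recursing until the remaining lengths are prime:

```python
import cmath

def mixed_radix_fft(x):
    """Recursive Cooley-Tukey FFT for any composite length.

    Splits N as N = r * m where r is the smallest prime factor of N
    (the "radix" of this stage), so lengths like 15 = 3*5 or 28 = 2*2*7
    are handled with radix-3, radix-5, and radix-7 stages.  Falls back
    to a direct DFT when N is prime.
    """
    n = len(x)
    if n == 1:
        return list(x)
    # Find the smallest prime factor r of n.
    r = next((p for p in range(2, int(n**0.5) + 1) if n % p == 0), n)
    if r == n:
        # Prime length: direct DFT.
        return [sum(x[m] * cmath.exp(-2j * cmath.pi * k * m / n)
                    for m in range(n)) for k in range(n)]
    m = n // r
    # r sub-FFTs of length m over the decimated sequences x[s::r].
    subs = [mixed_radix_fft(x[s::r]) for s in range(r)]
    # Combine with twiddle factors (one radix-r butterfly per output bin).
    return [sum(subs[s][k % m] * cmath.exp(-2j * cmath.pi * s * k / n)
                for s in range(r)) for k in range(n)]
```

A hardware core would implement each radix stage as a dedicated butterfly datapath rather than recursing, but the arithmetic is the same.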
The RV32EC_P2 processor core by IQonIC Works is a compact RISC-V processor designed for low-power embedded applications. It features a two-stage pipeline architecture ideal for running trusted firmware and offers a base RV32E instruction set. To enhance efficiency, the core supports RVC compressed instructions for reduced code sizes and optionally includes integer multiplication and division functionalities through the 'M' standard extension. Additionally, it accommodates custom instruction set extensions for tasks such as DSP operations, making it versatile for numerous applications. Designed for ASIC and FPGA implementations, the core provides interfaces like AHB-Lite or APB for memory and I/O operations, ensuring comprehensive system integration. Key features include a simple privileged architecture for machine-mode operations and clock gating for reduced power consumption during idle states. Furthermore, the core supports vectorized interrupts, enabling fast responses to system signals. The RV32EC_P2 is backed by a full suite of development and simulation tools, including synthesis scripts and firmware development environments based on the GNU tool chain. The Virtual Lab (VLAB) system-level tools offer enhanced support for developing and testing applications in a virtual context, ensuring a seamless development experience from conception to deployment.
The Origin E6 provides a formidable edge for AI processing demands in mobile and AR/VR applications, with performance specifications between 16 and 32 TOPS. Tailored to accommodate the latest AI models, the E6 benefits from Expedera's distinctive packet-based architecture. This design simplifies parallel processing, enhancing efficiency while reducing power and resource consumption. As an NPU, it supports an extensive array of video, audio, and text-based networks, delivering consistent performance even under complex workloads. The E6's high utilization rates minimize waste and amplify throughput, and its scalable, adaptable architecture makes it an optimal choice for forward-thinking devices requiring potent AI processing.
FortiPKA-RISC-V is a powerful public key algorithm coprocessor designed to enhance cryptographic operations through streamlined modular multiplication techniques. This IP core offers robust protection against side-channel and fault injection attacks, ensuring high performance by eliminating Montgomery domain transformations. Engineered to maximize efficiency, FortiPKA-RISC-V supports a variety of cryptographic protocols, making it suitable for applications demanding high-speed data processing with minimal latency. Its architecture ensures seamless integration into systems requiring secure key exchanges and digital signature verifications, showcasing versatility across different computational platforms. Additionally, this coprocessor is built with a focus on reducing hardware footprint, making it ideal for space and power-conscious applications such as embedded systems and mobile devices. By aligning with industry-standard cryptographic requirements, FortiPKA-RISC-V provides an effective solution for environments requiring elevated security without compromising on computational speed or area efficiency.
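For context, the "Montgomery domain transformations" that FortiPKA-RISC-V eliminates are the R-scaling steps shown in this Python sketch of classic Montgomery multiplication (the modulus here is a hypothetical value chosen for illustration):

```python
def mont_params(n, rbits):
    """Precompute R = 2^rbits (> n, gcd(R, n) = 1) and n' = -n^-1 mod R."""
    r = 1 << rbits
    n_inv = pow(n, -1, r)          # n is odd, so the inverse exists
    return r, (-n_inv) % r

def mont_redc(t, n, r, n_prime, rbits):
    """Montgomery reduction: returns t * R^-1 mod n for 0 <= t < R*n."""
    m = ((t & (r - 1)) * n_prime) & (r - 1)   # m = t * n' mod R
    u = (t + m * n) >> rbits                  # exact division by R
    return u - n if u >= n else u

# Multiplying a*b mod n the Montgomery way requires domain transforms:
n = 0xD5BB_33F9                 # an odd modulus (hypothetical value)
rbits = 32
r, n_prime = mont_params(n, rbits)
a, b = 123456789, 987654321
a_bar = (a * r) % n             # into the Montgomery domain
b_bar = (b * r) % n
c_bar = mont_redc(a_bar * b_bar, n, r, n_prime, rbits)
c = mont_redc(c_bar, n, r, n_prime, rbits)   # back out of the domain
assert c == (a * b) % n
```

Avoiding the `a_bar`/`b_bar` conversions and the final conversion back out of the domain is presumably where a transform-free modular multiplier saves cycles and area.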
The Load Unload FFT core is crafted to facilitate efficient data handling and transformation processes, essentially managing the input and output operations of FFT-based computations. It is particularly advantageous for applications where large volumes of data must be handled smoothly and without delay. Slightly more flexible compared to traditional FFT designs, this core allows for modification according to specific project requirements, making it an excellent choice for customized signal processing solutions. Designed to optimize data throughput with minimal latency, the Load Unload FFT core supports a variety of operational configurations. This allows it to accommodate different data structures and formats, enhancing its versatility across various digital processing environments. The core's architecture ensures consistent performance, even when integrated into complex systems requiring precise data transformation capabilities. The ability to orchestrate smooth data transitions from input to output is central to the Load Unload FFT core's functionality. By effectively managing these transitions, the core reduces potential bottlenecks in data processing, ensuring that systems operate at peak efficiency. For organizations involved in signal processing, this capability translates to improved productivity and enhanced data accuracy, essential for maintaining competitive advantage.
The Low Power Security Engine is designed by Low Power Futures to provide robust security for IoT devices while maintaining minimal power usage. Focusing on compact and comprehensive security capabilities, it supports elliptic-curve based cryptographic algorithms, including ECDHE (Elliptic-Curve Diffie-Hellman) and ECDSA (Elliptic Curve Digital Signature Algorithm). The design is perfect for ensuring secure operations in constrained environments like RFID systems and embedded SIM cards, offering side-channel and timing attack resistance, thus securing sensitive data in demanding applications like IIoT and connected infrastructure.
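To make the ECDHE primitive concrete, here is a toy Python model of elliptic-curve Diffie-Hellman key agreement over a miniature curve (parameters are illustrative only; production hardware uses standardized curves and side-channel-hardened arithmetic):

```python
# Toy elliptic-curve Diffie-Hellman over y^2 = x^3 + ax + b (mod p).
p, a, b = 97, 2, 3
G = (3, 6)        # a point on the curve: 6^2 = 36 = 3^3 + 2*3 + 3 (mod 97)

def point_add(P, Q):
    """Add two points on the curve (None is the point at infinity)."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def scalar_mult(k, P):
    """Double-and-add scalar multiplication k*P."""
    R = None
    while k:
        if k & 1:
            R = point_add(R, P)
        P = point_add(P, P)
        k >>= 1
    return R

# ECDH key agreement: both sides derive the same shared point.
alice_priv, bob_priv = 13, 27
alice_pub = scalar_mult(alice_priv, G)
bob_pub = scalar_mult(bob_priv, G)
shared_a = scalar_mult(alice_priv, bob_pub)
shared_b = scalar_mult(bob_priv, alice_pub)
assert shared_a == shared_b
```

A hardware security engine performs the same group operations but with constant-time, masked field arithmetic to resist the timing and side-channel attacks mentioned above.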
The Parallel FFT core exemplifies high-efficiency data processing by executing FFT operations simultaneously across multiple data inputs. This design significantly accelerates data transformation tasks, making it ideal for systems that require quick and reliable FFT computations. It is especially beneficial in scenarios where large data sets must be processed in parallel, such as in telecom systems or real-time analytics platforms. With an architecture optimized for concurrent operations, the Parallel FFT core effectively distributes data processing tasks among various computational paths. This reduces the time and resources needed to achieve desired computational results, allowing for higher bandwidth applications to be realized with greater ease. The core is crafted to adjust to various signal processing requirements, maintaining consistent performance across different use cases. The integration of multiple processing streams within the Parallel FFT core enables the quick transformation of data, effectively supporting applications that demand high throughput and low latency. By leveraging advanced parallel computation techniques, the core ensures that data processing tasks are handled efficiently, supporting real-time decision-making and processing in demanding environments.
Designed for high-performance recommendation systems, the RecAccel N3000 PCIe card enhances AI processing capabilities with its powerful architecture. It's optimized for deep learning recommendation models, providing superior power efficiency and heightened performance even under extensive loads. The platform capitalizes on its advanced inferencing capabilities, tailored for deploying recommendation systems that demand real-time processing and negligible latency. It offers significant reductions in computational lag, ensuring smooth and fast data throughput ideal for AI-driven applications. RecAccel N3000 is built to seamlessly integrate with existing infrastructures, allowing for quick deployment and scaling within data centers. Its balanced performance metrics and superior power management technologies make it indispensable for companies relying on accurate and swift recommendation systems, harnessing AI to drive better business insights and customer engagement.
IQonIC Works' RV32IC_P5 processor core is a high-performance RISC-V solution designed for medium-scale embedded systems requiring advanced processing capabilities and efficient multitasking. Its five-stage pipeline architecture supports complex operations with high-speed processing, catering to applications involving both trusted firmware and user code execution. The core handles a variety of tasks efficiently thanks to features like cache memories and privileged machine and user modes. It offers the complete RISC-V RV32I base instruction set and includes optional standard extensions for integer multiplication and division (M), user-level interrupts (N), and atomic operations (A). It is designed to optimize branch prediction and interrupt response times with configurable buffers and vectorized interrupt handling, which are critical for high-performance applications. Supporting both ASIC and FPGA design flows, the RV32IC_P5 core integrates tightly with memory and I/O interfaces, using AHB-Lite buses for extensive connectivity. The accompanying development environment includes the GNU tool chain and ASTC's VLAB for prototyping and firmware testing, ensuring developers have robust tools for seamless application development and deployment.
The High-Performance FPGA & ASIC Networking Product is engineered to integrate seamlessly into the distributed systems essential to critical domains. It leverages advanced hardware-based switch technology, utilizing a 10Gb backbone built on finite state machines, and as a result guarantees the high-efficiency performance metrics crucial for a range of industrial applications. Originally tailored for the aerospace sector, this networking product addresses key parameters such as safety certification, extensive cyber resilience, and low power usage and weight. The design removes the need for conventional software-based processing, significantly lowering overall system workload and promoting energy efficiency, which is vital for heavily regulated sectors like avionics. Moreover, by incorporating AES256-GCM encryption and supporting an extensive array of protocols including Ethernet, AFDX, TSN, and others, the product ensures secure data handling. It also provides adaptable data aggregation and conversion features, making it an ideal match for complex system architectures requiring robust and responsive network functionalities. Beyond aerospace, this IP finds relevance across sectors such as automotive, naval, and infrastructure management, aligning with industry-specific needs such as cybersecurity, system integration, and compatibility with legacy systems.
The RecAccel AI Platform brings a blend of high performance and precision, especially engineered for complex AI applications requiring high accuracy. It incorporates advanced neural processing elements that boost both the speed and precision of AI computations. Central to its design is the AI-centric architecture that handles intensive workloads while maintaining energy efficiency. By combining hardware advancements with intelligent software, the platform is capable of supporting diverse AI algorithms that require substantial computational resources. Whether used in data-intensive environments or real-time processing applications, the RecAccel AI Platform ensures reliability and scalability. Its ability to manage significant data flows and execute complex algorithms quickly makes it a cornerstone for businesses working with high-accuracy, data-driven models.
The IEEE Floating Point Multiplier/Adder IP core is engineered for high-performance computational environments where precision and speed are critical. Ideal for applications in digital signal processing, scientific computation, and graphics, this IP core adheres strictly to the IEEE 754 standard for floating-point arithmetic. In digital processing, especially when dealing with complex calculations and transformations, the importance of precise, standards-compliant floating-point operations cannot be overstated. This core addresses that need by providing efficient floating-point multiplication and addition. Its efficiency not only speeds up operations but also reduces resource usage on both FPGAs and ASICs. Its versatile architecture is compatible with a variety of systems and ensures correct operation in fields as varied as computer graphics, scientific research, and financial modeling. By reducing computational time and improving the accuracy of calculations, this IP core plays a substantial role in advancing the performance of processing units where mathematical precision is prioritized.
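A floating-point multiplier's datapath can be illustrated in software. This Python sketch (deliberately simplified: truncation instead of IEEE round-to-nearest, and no handling of zeros, denormals, infinities, or NaNs; it is not the vendor's implementation) shows the sign/exponent/significand decomposition and recombination such a core performs for single precision:

```python
import struct

def fp32_fields(x):
    """Unpack an IEEE-754 single-precision value into its three fields."""
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF          # biased by 127
    mantissa = bits & 0x7FFFFF              # 23 fraction bits
    return sign, exponent, mantissa

def fp32_multiply(x, y):
    """Multiply two normal fp32 values the way a hardware multiplier does:
    XOR the signs, add the unbiased exponents, multiply the significands
    (with the implicit leading 1), then renormalize."""
    sx, ex, mx = fp32_fields(x)
    sy, ey, my = fp32_fields(y)
    sign = sx ^ sy
    sig = (mx | 1 << 23) * (my | 1 << 23)   # 48-bit significand product
    exp = (ex - 127) + (ey - 127)
    if sig >> 47:                           # product in [2, 4): shift right
        sig >>= 1
        exp += 1
    mant = (sig >> 23) & 0x7FFFFF           # truncate back to 23 bits
    bits = (sign << 31) | ((exp + 127) << 23) | mant
    return struct.unpack('>f', struct.pack('>I', bits))[0]
```

The hardware equivalent adds a rounding stage and special-case logic after the normalization shift, which is where most of a compliant multiplier's complexity lives.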
**DRV64IMZicsr – 64-bit RISC-V Performance. Designed for Demanding Innovation.**

The DRV64IMZicsr is a powerful and versatile 64-bit RISC-V CPU core, built to meet the performance and safety needs of next-generation embedded systems. Featuring the M (Multiply/Divide), Zicsr (Control and Status Registers), and External Debug extensions, this core is engineered to scale from edge computing to mission-critical applications. As part of the DRVX Core Family, the DRV64IMZicsr embodies DCD's philosophy of combining open-standard freedom with customizable IP excellence, making it a smart and future-proof alternative to legacy architectures.

✅ Why Choose RISC-V?
* No license fees – an open-source instruction set means reduced TCO
* Unmatched flexibility – tailor the architecture to your specific needs
* A global, thriving ecosystem – support from toolchains, OSes, and hardware vendors
* Security & longevity – an open and verifiable architecture ensures trust and sustainability

🚀 DRV64IMZicsr – Core Advantages:
* 64-bit RISC-V ISA with M, Zicsr, and Debug support
* Five-stage pipeline, Harvard architecture, and efficient branch prediction
* Configurable memory size and allocation for program and data spaces
* Performance optimized:
  * **Up to 2.38 CoreMark/MHz**
  * **Up to 1.17 DMIPS/MHz**
* Compact footprint starting from just 17.6k gates
* Interface options: AXI, AHB, or native
* Compatible with Classical CAN, CAN FD, and CAN XL through additional IPs

🛡️ Safety, Compatibility & Flexibility Built In:
* Developed as an ISO 26262 Safety Element out of Context (SEooC)
* Technology-agnostic – works seamlessly across all FPGA and ASIC vendors
* Expandable with DCD's IP portfolio: DMA, SPI, UART, I²C, CAN, PWM, and more

🔍 Robust Feature Set for Real Applications:
* Full 64-bit processing – ideal for performance-intensive, memory-heavy tasks
* M extension enables high-speed multiplication/division via a dedicated hardware unit
* Zicsr extension gives full access to Control and Status Registers, enabling:
  * Interrupts and exception handling (per the RISC-V Privileged Spec)
  * Performance counters and timers
* JTAG-compatible debug interface – compliant with the RISC-V Debug Spec (0.13.2 & 1.0.0)

🧪 Ready for Development & Integration:
* Comes with a fully automated testbench
* Includes a comprehensive suite of validation tests for smooth SoC integration
* Supported by industry-standard tools, ensuring a hassle-free dev experience

Whether you're designing for automotive safety, industrial control, IoT gateways, or AI-enabled edge devices, the DRV64IMZicsr gives you the performance, flexibility, and future-readiness of RISC-V without compromise.

💡 Build smarter, safer systems – on your terms.
📩 Contact us today at info@dcd.pl to start your next RISC-V-powered project.
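The Zicsr semantics listed above can be modeled in a few lines. This Python sketch (the CSR addresses shown are illustrative; the real register map is defined by the RISC-V Privileged Specification) captures the atomic read-modify-write behavior of the `csrrw`, `csrrs`, and `csrrc` instructions:

```python
# A minimal model of the RISC-V Zicsr read-modify-write instructions.
csrs = {0x300: 0, 0xC00: 0}   # e.g. mstatus and cycle (addresses per the spec)

def csrrw(addr, new):
    """Atomic swap: the old CSR value is returned, the new one written."""
    old = csrs[addr]
    csrs[addr] = new
    return old

def csrrs(addr, mask):
    """Read the old value; set the bits given in mask."""
    old = csrs[addr]
    csrs[addr] = old | mask
    return old

def csrrc(addr, mask):
    """Read the old value; clear the bits given in mask."""
    old = csrs[addr]
    csrs[addr] = old & ~mask
    return old
```

In hardware these read-modify-write sequences execute atomically in a single instruction, which is what makes CSRs safe for interrupt enable bits and performance counters.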
DRV32IMZicsr – Scalable RISC-V Power. Tailored for Your Project. Ready for the Future.

The DRV32IMZicsr is a high-performance, 32-bit RISC-V processor core, equipped with M (Multiply/Divide), Zicsr (Control and Status Registers), and External Debug support. Built as part of DCD's latest DRVX Core Family, it delivers the full flexibility, openness, and innovation that RISC-V promises, without locking you into proprietary architectures.

✅ Why RISC-V?
RISC-V is a rapidly growing open standard for modern computing, backed by a global ecosystem of developers and vendors. It brings:
* Freedom from licensing fees and vendor lock-in
* Scalability from embedded to high-performance systems
* Customizability with standard and custom instruction sets
* Strong toolchain & ecosystem support

🚀 DRV32IMZicsr Highlights:
* Five-stage pipeline and Harvard architecture for optimized performance
* Configurable memory architecture: size and address allocation tailored to your needs
* Performance metrics:
  * **Up to 1.15 DMIPS/MHz**
  * **Up to 2.36 CoreMark/MHz**
* Minimal footprint starting from just 14k gates
* Flexible interfaces: choose from AXI, AHB, or native bus options

🛡️ Designed for Safety & Integration:
* Developed as an ISO 26262 Safety Element out of Context (SEooC)
* Fully technology-agnostic, compatible with all FPGA and ASIC platforms
* Seamless integration with DCD's rich portfolio of IPs: DMA, SPI, UART, PWM, CAN, and more

🔍 Advanced Feature Set:
* 32 general-purpose registers
* Support for arithmetic, logic, load/store, and conditional and unconditional control flow
* M extension enables efficient integer multiplication/division
* Zicsr extension provides robust interrupt and exception handling, performance counters, and timers
* External Debug via JTAG: compliant with RISC-V Debug Specification 0.13.2 and 1.0.0, compatible with all mainstream tools

🧪 Developer-Ready:
* Delivered with a fully automated testbench
* Includes a comprehensive validation test suite for smooth integration into your SoC flow

Whether you're building for automotive, IoT, consumer electronics, or embedded systems, the DRV32IMZicsr offers a future-ready RISC-V solution: highly configurable, performance-optimized, and backed by DCD's 25 years of experience.

Interested? Let's build the next generation together. 📩 Contact us at info@dcd.pl
Application Specific Instruction-set Processors (ASIPs) from Wasiela are customized processors optimized for specific application needs, providing efficient computational power for modern digital solutions. These processors are designed to excel in tasks requiring significant processing strength coupled with low power consumption, making them ideal for use in embedded systems and consumer electronics where power efficiency is crucial. Wasiela’s ASIPs feature a configurable architecture, allowing for customization that matches specific application requirements. This adaptability not only enhances performance by minimizing unnecessary computational operations but also extends the battery life of devices through efficient power usage. The ASIPs architecture supports diverse applications, offering enhanced performance for signals processing, encryption, and other specialized tasks. Their design framework facilitates easy integration into complex systems, providing developers a robust foundation upon which to build high-performance, custom solutions for ever-evolving technological demands.
The Miscellaneous FEC and DSP IP cores offered by Creonic extend beyond standard modulation and coding, contributing a suite of advanced signal processing components. These are crucial for specialized use cases in high-speed, multi-channel processing, supporting real-time applications smoothly and efficiently. Among these offerings, the NCR Processor and DVB-GSE encapsulators are uniquely suited to time synchronization and protocol-specific tasks. Combined with digital downconverters and wideband channelizers, Creonic's miscellaneous cores fill essential gaps in protocol architectures, ensuring comprehensive support for system architectures. The inclusion of IPs like the Ultrafast BCH Decoder and Doppler Channel components underscores the company's commitment to delivering trusted building blocks for complex communication landscapes. For designers aiming to complete their projects with reliable, efficient components, Creonic's miscellaneous DSP cores are significant assets.
The D68000-CPU32 soft core is binary compatible with the industry-standard CPU32 variant of the 68000 family of 32-bit microcontrollers. It has a 16-bit data bus and a 24-bit address bus, and is code compatible with the CPU32 (a 68020-derived core). The D68000-CPU32 includes an enhanced instruction set, allowing for higher performance than the standard 68000 core, and a built-in DoCD-BDM debugger interface. It is delivered with a fully automated testbench and a set of tests enabling easy package validation during different stages of the SoC design flow. The D68000-CPU32 is technology-agnostic, ensuring compatibility with all FPGA and ASIC vendors.
The BA25 Advanced Application Processor is designed for superior performance, with independent functional units allowing out-of-order completion. It achieves 1.51 DMIPS/MHz and can reach operating frequencies up to 800 MHz in a 65 nm LP process, making it one of the most capable processors in Beyond's catalogue. Capable of high instruction throughput, the BA25 stands out for its excellent code density, supporting sophisticated application processes and demanding operating systems. The processor's design supports progressive data processing strategies, making it suitable for applications demanding high computational power without sacrificing efficiency, and positioning it as a premium choice for advanced processing needs.
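Multiplying out the published figures gives the peak Dhrystone throughput at the maximum clock (assuming the core sustains its per-MHz rating at 800 MHz):

```python
# Peak Dhrystone throughput implied by the BA25's published figures:
dmips_per_mhz = 1.51
f_max_mhz = 800                        # 65 nm LP process
peak_dmips = dmips_per_mhz * f_max_mhz
assert round(peak_dmips) == 1208       # roughly 1200 DMIPS at peak clock
```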