All IPs > Processor > DSP Core
In the realm of semiconductor IP, DSP Cores play a pivotal role in enabling efficient digital signal processing capabilities across a wide range of applications. Short for Digital Signal Processor Cores, these semiconductor IPs are engineered to handle complex mathematical calculations swiftly and accurately, making them ideal for integration into devices requiring intensive signal processing tasks.
DSP Core semiconductor IPs are widely implemented in industries like telecommunications, where they are crucial for modulating and encoding signals in mobile phones and other communication devices. They empower these devices to perform multiple operations simultaneously, including compressing audio, optimizing bandwidth usage, and enhancing data packets for better transmission quality. Additionally, in consumer electronics, DSP Cores are fundamental in audio and video equipment, improving the clarity and quality of sound and visuals users experience.
Moreover, DSP Cores are a linchpin in the design of advanced automotive systems and industrial equipment. In automotive applications, they assist in radar and lidar systems, crucial for autonomous driving features by processing the data needed for real-time environmental assessment. In industrial settings, DSP Cores amplify the performance of control systems by providing precise feedback loops and enhancing overall process automation and efficiency.
Silicon Hub's category for DSP Core semiconductor IPs includes a comprehensive collection of advanced designs tailored to various processing needs. These IPs are designed to integrate seamlessly into a multitude of hardware architectures, offering designers and engineers the flexibility and performance necessary to push the boundaries of technology in their respective fields. Whether for enhancing consumer experiences or driving innovation in industrial and automotive sectors, our DSP Core IPs bring unparalleled processing power to the forefront of digital innovations.
The 1G to 224G SerDes is a versatile serializer/deserializer technology designed to facilitate high-speed data transfers across various interface standards. It caters to stringent speed requirements by supporting a wide range of data rates and signaling schemes, allowing efficient integration into comprehensive communication systems. This SerDes technology excels in delivering reliable, low-latency connections, making it ideal for hyperscale data centers, AI, and 5G networking where fast, efficient data processing is essential. The broad compatibility with numerous industry protocols also ensures seamless interoperability with existing systems. Designed for scalability, the 1G to 224G SerDes provides design flexibility, encouraging implementation across a variety of demanding environments. Its sophisticated architecture promotes energy efficiency and robust performance, crucial for addressing the ever-growing connectivity demands of modern technology infrastructures.
The Veyron V2 CPU represents Ventana's second-generation RISC-V high-performance processor, designed for cloud, data center, edge, and automotive applications. This processor offers outstanding compute capabilities with its server-class architecture, optimized for handling complex, virtualized, and cloud-native workloads efficiently. The Veyron V2 is available as both IP for custom SoCs and as a complete silicon platform, ensuring flexibility for integration into various technological infrastructures. Emphasizing a modern architectural design, it includes full compliance with RISC-V RVA23 specifications, showcasing features like high instructions per clock (IPC) and a power-efficient architecture. Comprising multiple core clusters, this CPU is capable of delivering superior AI and machine learning performance, significantly boosting throughput and energy efficiency. The Veyron V2's advanced fabric interconnects and extensive cache architecture provide the necessary infrastructure for high-performance applications, ensuring broad market adoption and versatile deployment options.
The Quadric Chimera General Purpose Neural Processing Unit (GPNPU) delivers unparalleled performance for AI workloads, characterized by its ability to handle diverse and complex tasks without requiring separate processors for different operations. Designed to unify AI inference and traditional computing processes, the GPNPU supports matrix, vector, and scalar tasks within a single, cohesive execution pipeline. This design not only simplifies the integration of AI capabilities into system-on-chip (SoC) architectures but also significantly boosts developer productivity by allowing them to focus on optimizing rather than partitioning code. The Chimera GPNPU is highly scalable, supporting a wide range of operations across all market segments, including automotive applications with its ASIL-ready versions. With a performance range from 1 to 864 TOPS, it excels in running the latest AI models, such as vision transformers and large language models, alongside classic network backbones. This flexibility ensures that devices powered by Chimera GPNPU can adapt to advancing AI trends, making them suitable for applications that require both immediate performance and long-term capability. A key feature of the Chimera GPNPU is its fully programmable nature, making it a future-proof solution for deploying cutting-edge AI models. Unlike traditional NPUs that rely on hardwired operations, the Chimera GPNPU uses a software-driven approach with its source RTL form, making it a versatile option for inference in mobile, automotive, and edge computing applications. This programmability allows for easy updating and adaptation to new AI model operators, maximizing the lifespan and relevance of chips that utilize this technology.
Leveraging a high-performance RISC architecture, the eSi-3250 32-bit core efficiently integrates instruction and data caches. This makes it compatible with designs utilizing slower on-chip memories such as eFlash. The core not only supports MMU for address translation but also allows for user-defined custom instructions, greatly enhancing its flexibility for specialized and high-performance applications.
The xcore.ai platform by XMOS is a versatile, high-performance microcontroller designed for the integration of AI, DSP, and real-time I/O processing. Focusing on bringing intelligence to the edge, this platform facilitates the construction of entire DSP systems using software without the need for multiple discrete chips. Its architecture is optimized for low-latency operation, making it suitable for diverse applications from consumer electronics to industrial automation. This platform offers a robust set of features conducive to sophisticated computational tasks, including support for AI workloads and enhanced control logic. The xcore.ai platform streamlines development processes by providing a cohesive environment that blends DSP capabilities with AI processing, enabling developers to realize complex applications with greater efficiency. By doing so, it reduces the complexity typically associated with chip integration in advanced systems. Designed for flexibility, xcore.ai supports a wide array of applications across various markets. Its ability to handle audio, voice, and general-purpose processing makes it an essential building block for smart consumer devices, industrial control systems, and AI-powered solutions. Coupled with comprehensive software support and development tools, the xcore.ai ensures a seamless integration path for developers aiming to push the boundaries of AI-enabled technologies.
The **Ceva-SensPro DSP family** unites scalar processing units and vector processing units under an 8-way VLIW architecture. The family incorporates advanced control features such as a branch target buffer and a loop buffer to speed up execution and reduce power. There are six family members, each with a different array of MACs, targeted at different application areas and performance points. These range from the Ceva-SP100, providing 128 8-bit integer or 32 16-bit integer MACs at 0.2 TOPS performance for compact applications such as vision processing in wearables and mobile devices; to the Ceva-SP1000, with 1024 8-bit or 256 16-bit MACs reaching 2 TOPS for demanding applications such as automotive, robotics, and surveillance. Two of the family members, the Ceva-SPF2 and Ceva-SPF4, employ 32 or 64 32-bit floating-point MACs, respectively, for applications in electric-vehicle power-train control and battery management. These two members are supported by libraries for Eigen linear algebra, MATLAB vector operations, and the TVM graph compiler. Highly configurable, the vector processing units in all family members can add domain-specific instructions for such areas as vision processing, radar, or simultaneous localization and mapping (SLAM) for robotics. Integer family members can also add optional floating-point capabilities. All family members have independent instruction and data memory subsystems and a Ceva-Connect queue manager for AXI-attached accelerators or coprocessors. The Ceva-SensPro2 family is programmable in C/C++ as well as in Halide and OpenMP, and supported by an Eclipse-based development environment, extensive libraries spanning a wide range of applications, and the Ceva-NeuPro Studio AI development environment. [**Learn more about Ceva-SensPro2 solution>**](https://www.ceva-ip.com/product/ceva-senspro2/?utm_source=silicon_hub&utm_medium=ip_listing&utm_campaign=ceva_senspro2_page)
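The TOPS figures quoted for these cores follow directly from the width of the MAC array. As a rough sanity check (a sketch, assuming one MAC issue per cycle, counting each MAC as two operations, and an illustrative clock of around 1 GHz, which is not specified in the listing):

```python
def peak_tops(num_macs: int, clock_hz: float, ops_per_mac: int = 2) -> float:
    """Peak throughput in TOPS: MACs/cycle * ops/MAC * cycles/sec / 1e12."""
    return num_macs * ops_per_mac * clock_hz / 1e12

# Ceva-SP1000: 1024 8-bit MACs at an assumed ~1 GHz clock -> ~2 TOPS,
# consistent with the 2 TOPS figure quoted above.
print(peak_tops(1024, 1e9))  # 2.048
```

The same arithmetic puts the 128-MAC Ceva-SP100 near the quoted 0.2 TOPS at a somewhat lower clock.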
Syntacore's SCR3 microcontroller core is a versatile option for developers looking to harness the power of a 5-stage in-order pipeline. Designed to support both 32-bit and 64-bit symmetric multiprocessing (SMP) configurations, this core is perfectly aligned with the needs of embedded applications requiring moderate power and resource efficiency coupled with enhanced processing capabilities. The architecture is fine-tuned to handle a variety of workloads, ensuring a balance between performance and power usage, making it suitable for sectors such as industrial automation, automotive sensors, and IoT devices. The inclusion of privilege modes, memory protection units (MPUs), and cache systems further enhances its capabilities, particularly in environments where system security and reliability are paramount. Developers will find the SCR3 core to be highly adaptable, fitting seamlessly into designs that need scalability and modularity. Syntacore's comprehensive toolkit, combined with detailed documentation, ensures that system integration is both quick and reliable, providing a robust foundation for varied applications.
The Codasip RISC-V BK Core Series is designed to offer flexible and high-performance core options catering to a wide range of applications, from low-power tasks to intricate computational needs. This series achieves an optimal balance of power consumption and processing speed, making it suitable for applications demanding energy efficiency without compromising performance. These cores are fully RISC-V compliant, allowing for easy customization to suit specific needs by modifying the processor's architecture or instruction set through Codasip Studio. The BK Core Series streamlines the development of precise computing solutions, ideal for IoT edge devices and sensor controllers where both small area and low power are critical. Moreover, the BK Core Series supports architectural exploration, enabling users to optimize the core design specifically tailored to their applications. This capability ensures that each core delivers the power, efficiency, and performance metrics required by modern technological solutions.
The Digital Radio (GDR) from GIRD Systems is an advanced software-defined radio (SDR) platform that offers extensive flexibility and adaptability. It is characterized by its multi-channel capabilities and high-speed signal processing resources, allowing it to meet a diverse range of system requirements. Built on a core single board module, this radio can be configured for both embedded and standalone operations, supporting a wide frequency range. The GDR can operate with either one or two independent transceivers, with options for full or half duplex configurations. It supports single channel setups as well as multiple-input multiple-output (MIMO) configurations, providing significant adaptability in communication scenarios. This flexibility makes it an ideal choice for systems that require rapid reconfiguration or scalability. Known for its robust construction, the GDR is designed to address challenging signal processing needs in congested environments, making it suitable for a variety of applications. Whether used in defense, communications, or electronic warfare, the GDR's ability to seamlessly switch configurations ensures it meets the evolving demands of modern communications technology.
The Spiking Neural Processor T1 is designed as a highly efficient microcontroller that integrates neuromorphic intelligence closely with sensors. It employs a unique spiking neural network engine paired with a nimble RISC-V processor core, forming a cohesive unit for advanced data processing. With this setup, the T1 excels in delivering next-gen AI capabilities embedded directly at the sensor, operating within an exceptionally low power consumption range, ideal for battery-dependent and latency-sensitive applications. This processor marks a notable advancement in neuromorphic technology, allowing for real-time pattern recognition with minimal power draw. It supports various interfaces like QSPI, I2C, and UART, fitting into a compact 2.16mm x 3mm package, which facilitates easy integration into diverse electronic devices. Additionally, its architecture is designed to process different neural network models efficiently, from spiking to deep neural networks, providing versatility across applications. The T1 Evaluation Kit furthers this ease of adoption by enabling developers to use the Talamo SDK to create or deploy applications readily. It includes tools for performance profiling and supports numerous common sensors, making it a strong candidate for projects aiming to leverage low-power, intelligent processing capabilities. This innovative chip's ability to manage power efficiency with high-speed pattern processing makes it especially suitable for advanced sensing tasks found in wearables, smart home devices, and more.
The eSi-3200, a 32-bit cacheless core, is tailored for embedded control with its expansive and configurable instruction set. Its capabilities, such as 64-bit multiply-accumulate operations and fixed-point complex multiplications, cater effectively to signal processing tasks like FFTs and FIRs. Additionally, it supports SIMD and single-precision floating point operations, coupled with efficient power management features, enhancing its utility for diverse embedded applications.
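A wide multiply-accumulate datapath like the one described above is exactly what makes FIR filtering cheap on such cores. The following is a minimal fixed-point FIR sketch in plain Python (a generic illustration of the technique, not eSi-3200 code), where a wide accumulator holds the full sum of products before a single final rounding shift:

```python
def fir_fixed_point(samples, coeffs, frac_bits=15):
    """Direct-form FIR with Q15 coefficients and a wide accumulator.
    All products are summed at full precision; one rounding shift at the end,
    mirroring a 64-bit accumulate followed by a scale-back step."""
    out = []
    hist = [0] * len(coeffs)          # delay line, newest sample first
    for x in samples:
        hist = [x] + hist[:-1]
        acc = sum(c * h for c, h in zip(coeffs, hist))   # wide accumulate
        out.append((acc + (1 << (frac_bits - 1))) >> frac_bits)  # round, rescale
    return out

# Identity filter: a single tap equal to 1.0 in Q15 passes the signal through
print(fir_fixed_point([100, -200, 300], [1 << 15]))  # [100, -200, 300]
```

Deferring the shift until after accumulation is the point of the wide accumulator: intermediate sums never lose precision to per-tap rounding.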
ISPido on the VIP Board is tailored for Lattice Semiconductor's Video Interface Platform, providing a runtime solution optimized for delivering crisp, balanced images in real-time. This solution offers two primary configurations: automatic deployment for optimal settings instantly upon startup, and a manual, menu-driven interface allowing users to fine-tune settings such as gamma tables and convolution filters. Utilizing the CrossLink VIP Input Bridge with Sony IMX 214 sensors and an ECP5-85 FPGA, it provides HD output in HDMI YCrCb format, ensuring high-quality image resolution and real-time calibration.
The Codasip L-Series DSP Core is tailored for applications that require efficient digital signal processing capabilities. Known for its adaptability and top-notch performance, this core is a prime choice for tasks demanding real-time processing and a high level of computational density. The L-Series embraces the RISC-V open standard with an enhanced design that allows for customizing the instruction set and leveraging unique microarchitectural features. Such adaptability is ideal for applications in industries where digital signal manipulation is critical, such as audio processing, telecommunications, and advanced sensor applications. Users are empowered through Codasip Studio to implement specific enhancements and modifications, aligning the core's capabilities with specialized operational requirements. This core not only promises high-speed processing but also ensures that resource allocation is optimized for each specific digital processing task.
The Veyron V1 is a high-performance RISC-V CPU designed to meet the rigorous demands of modern data centers and compute-intensive applications. This processor is tailored for cloud environments requiring extensive compute capabilities, offering substantial power efficiency while optimizing processing workloads. It provides comprehensive architectural support for virtualization and efficient task management with its robust feature set. Incorporating advanced RISC-V standards, the Veyron V1 ensures compatibility and scalability across a wide range of industries, from enterprise servers to high-performance embedded systems. Its architecture is engineered to offer seamless integration, providing an excellent foundation for robust, scalable computing designs. Equipped with state-of-the-art processing cores and enhanced vector acceleration, the Veyron V1 delivers unmatched throughput and performance management, making it suitable for use in diverse computing environments.
The iCan PicoPop® is a highly compact System on Module (SOM) based on the Zynq UltraScale+ MPSoC from Xilinx, suited for high-performance embedded applications in aerospace. Known for its advanced signal processing capabilities, it is particularly effective in video processing contexts, offering efficient data handling and throughput. Its compact size and performance make it ideal for integration into sophisticated systems where space and performance are critical.
The Universal DSP Library is designed to simplify digital signal processing tasks. It ensures efficient and highly effective operations by offering a comprehensive suite of algorithms and functions tailored for various DSP applications. The library is engineered for optimal performance and can be easily integrated into FPGA-based designs, making it a versatile tool for any digital signal processing needs. The comprehensive nature of the Universal DSP Library simplifies the development of complex signal processing applications. It includes support for key processing techniques and can significantly reduce the time required to implement and test DSP functionalities. By leveraging this library, developers can achieve high efficiency and performance in their digital signal processing tasks, thereby optimizing overall system resources. Moreover, the DSP library is designed to be compatible with a wide range of FPGAs, providing a flexible and scalable solution. This makes it an ideal choice for developers seeking to create innovative solutions across various applications, ensuring that their designs can handle demanding signal processing requirements effectively.
ISPido offers a comprehensive set of IP cores focused on high-resolution image signal processing and tuning across multiple devices and platforms, including CPU, GPU, VPU, FPGA, and ASIC technologies. Its flexibility is a standout feature, accommodating ultra-low power devices as well as systems exceeding 8K resolution. Designed for devices where power efficiency and high-quality image processing are paramount, ISPido adapts to a range of hardware architectures to deliver optimal image quality and processing capabilities. The IP has been widely adopted in various applications, making it a cornerstone for industries requiring advanced image calibration and processing capabilities.
TUNGA is an advanced System on Chip (SoC) leveraging the strengths of Posit arithmetic for accelerated High-Performance Computing (HPC) and Artificial Intelligence (AI) tasks. The TUNGA SoC integrates multiple CRISP-cores, employing Posit as a core technology for real-number calculations. This multi-core RISC-V SoC is uniquely equipped with a fixed-point accumulator known as QUIRE, which allows for extremely precise computations across vectors of up to 2 billion entries. The TUNGA SoC includes programmable FPGA gates for enhancing field-critical functions. These gates are instrumental in speeding up data center services, offloading tasks from the CPU, and advancing AI training and inference efficiency using non-standard data types. TUNGA's architecture is tailored for applications demanding high precision, including cryptography and variable-precision computing tasks, facilitating the transition towards next-generation arithmetic. Within the computing landscape, TUNGA stands out by offering customizable features and rapid processing capabilities, making it suitable not only for typical data center functions but also for complex, precision-demanding workloads. By capitalizing on Posit arithmetic, TUNGA aims to deliver more efficient and powerful computational performance, reflecting a strategic advancement in handling complex data-oriented processes.
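The value of a wide fixed-point accumulator such as QUIRE can be illustrated in plain Python (a conceptual sketch of exact accumulation, not the Posit format itself): naive floating-point summation loses small terms when magnitudes differ, while an exact accumulator keeps every term.

```python
import math

values = [1e16, 1.0, -1e16, 1.0]   # mixed magnitudes defeat naive float sums

naive = 0.0
for v in values:
    naive += v                      # rounds at every step; small terms vanish

# math.fsum computes the exact sum, standing in for a wide
# fixed-point (QUIRE-style) accumulator that never rounds mid-sum.
exact = math.fsum(values)

print(naive)  # 1.0  (the first +1.0 was absorbed by 1e16)
print(exact)  # 2.0  (every term accounted for)
```

A hardware accumulator wide enough to cover the full dynamic range of the products achieves the same effect with a single rounding at the very end of the dot product.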
Tachyum's Prodigy Universal Processor marks a significant milestone as it combines the functionalities of Central Processing Units (CPUs), General-Purpose Graphics Processing Units (GPGPUs), and Tensor Processing Units (TPUs) into a single cohesive architecture. This groundbreaking design is tailored to meet the escalating demands of artificial intelligence, high-performance computing, and hyperscale data centers by offering unparalleled performance, energy efficiency, and high utilization rates. The Prodigy processor not only tackles common data center challenges like elevated power consumption and stagnating processor performance but also offers a robust solution to enhance server utilization and reduce the carbon footprint of massive computational installations. Notably, it thrives on a simplified programming model grounded in coherent multiprocessor architecture, thereby enabling seamless execution of an array of AI disciplines like Explainable AI, Bio AI, and deep machine learning within a single hardware platform.
The RFicient chip is designed for Internet of Things (IoT) applications and is recognized for its ultra-low-power operation. It aims to innovate the IoT landscape by offering a highly efficient receiver technology that significantly reduces power consumption. This chip supports energy harvesting to ensure sustainable operation and contributes to green IoT development by lessening the dependency on traditional power sources. Functionally, the RFicient chip enhances IoT devices' performance by providing cutting-edge reception capabilities, which allow for the consistent and reliable transmission of data across varied environments. This robustness makes it ideal for applications in industrial IoT settings, including smart cities and agricultural monitoring, where data integrity and longevity are crucial. Technically, the RFicient chip's architecture employs intelligent design strategies that deliver low-latency responses in data processing, making it responsive and adaptable to rapid changes in its operational environment. These characteristics position it as a versatile solution for businesses aiming to deploy IoT networks with minimal environmental footprint and extended operational lifespan.
The Trifecta-GPU delivers exceptional computational power using the NVIDIA RTX A2000 embedded GPU. With a focus on the modular test and measurement and electronic warfare markets, this GPU is capable of delivering 8.3 FP32 TFLOPS of compute performance. It is tailored for advanced signal processing and machine learning, making it indispensable for modern, software-defined signal processing applications. This GPU is part of the COTS PXIe/CPCIe modular family, known for its flexibility and ease of use. The NVIDIA GPU integration means users can expect robust performance for AI inference applications, facilitating quick deployment in various scenarios requiring advanced data processing. Incorporating the latest in graphical performance, the Trifecta-GPU supports a broad range of applications, from high-end computing tasks to graphics-intensive processes. It is particularly beneficial for those needing a reliable and powerful GPU for modular T&M and EW projects.
The SiFive Performance family is dedicated to offering high-throughput, low-power processor solutions, suitable for a wide array of applications from data centers to consumer devices. This family includes a range of 64-bit, out-of-order cores configured with options for vector computations, making it ideal for tasks that demand significant processing power alongside efficiency. Performance cores provide unmatched energy efficiency while accommodating a breadth of workload requirements. Their architecture supports up to six-wide out-of-order processing with tailored options that include multiple vector engines. These cores are designed for flexibility, enabling various implementations in consumer electronics, network storage solutions, and complex multimedia processing. The SiFive Performance family facilitates a mix of high performance and low power usage, allowing users to balance computational needs with power consumption effectively. It stands as a testament to SiFive's dedication to enabling flexible tech solutions by offering rigorous processing capabilities in compact, scalable packages.
The NX Class RISC-V CPU IP by Nuclei is characterized by its 64-bit architecture, making it a robust choice for storage, AR/VR, and AI applications. This processing unit is designed to accommodate high data throughput and demanding computational tasks. By leveraging advanced capabilities, such as virtual memory and enhanced processing power, the NX Class facilitates cutting-edge technological applications and is adaptable for integration into a vast array of high-performance systems.
Engineered for a dynamic performance footprint, the SCR4 microcontroller core offers a significant advantage with its 5-stage in-order pipeline and specialized floating-point unit (FPU). This characteristic makes it ideal for applications demanding precise computational accuracy and speed, such as control systems, network devices, and automotive technologies. Leveraging 32/64-bit capability, the SCR4 core supports symmetric multiprocessing (SMP) with the added benefit of privilege modes and a comprehensive memory architecture, which includes both L1 and L2 caches. These features make it particularly attractive for developers seeking a core that enables high data throughput while maintaining a focus on power efficiency and area optimization. Syntacore has positioned the SCR4 as a go-to core for projects requiring both power and precision, supported by a development environment that is both intuitive and comprehensive. Its applicability across various industrial sectors underscores its versatility and the robustness of the RISC-V architecture that underpins it.
The eSi-3264 stands out with its support for both 32/64-bit operations, including 64-bit fixed and floating-point SIMD (Single Instruction Multiple Data) DSP extensions. Engineered for applications mandating DSP functionality, it does so with minimal silicon footprint. Its comprehensive instruction set includes specialized commands for various tasks, bolstering its practicality across multiple sectors.
The Neural Network Accelerator by Gyrus AI is an advanced compute solution specially optimized for neural network applications. It features native graph processing capabilities that significantly enhance the computational efficiency of AI models. This IP component supports high-speed operations at 30 TOPS/W, offering exceptional performance that significantly reduces the clock cycles typically required by other systems.

Moreover, the architecture is designed to consume 10-20 times less power, benefitting from a low-memory-usage configuration. This efficiency is further highlighted by the IP's ability to achieve an 80% utilization rate across various model structures, which translates into die area reductions of 8 to 10 times compared with conventional designs.

Gyrus AI's Neural Network Accelerator also comes with software tools tailored to run neural networks on the platform, making it a practical choice for edge computing applications. It not only supports large-scale AI computations but also minimizes power consumption and space constraints, making it ideal for high-performance environments.
The Cottonpicken DSP Engine is a highly efficient processing solution designed for advanced image and signal processing applications. This engine primarily handles Bayer pattern decoding, transforming raw image data into formats like YUV 4:2:2, YUV 4:2:0, and RGB. It supports programmable delays, offering versatility in managing data processing timelines to meet specific application needs. A standout feature of the Cottonpicken DSP Engine is its ability to support various YUV conversions, including YCrCb and YCoCg, with integrated support for 3x3 and 5x5 filter kernels. This makes it ideal for complex matrix operations, allowing the engine to be cascaded for expanded processing capabilities. Operating at pixel clock speeds of up to 150 MHz, it provides high performance suitable for numerous platform dependencies. The DSP engine is provided as a closed-source netlist within a development package, ensuring secure and controlled deployment. This setup is advantageous for developers seeking a reliable, pre-tested solution for integrating DSP capabilities into their systems without releasing source code. It's especially beneficial in applications where proprietary technology and methods are to be protected, enabling users to leverage powerful DSP functions while safeguarding intellectual property.
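The YCoCg transform mentioned above is a simple integer lifting scheme, which is part of why it maps so well onto fixed hardware datapaths. The following is a reference sketch in Python of the lossless YCoCg-R variant (illustrative only; the engine itself processes pixel streams through a fixed datapath rather than per-pixel function calls):

```python
def rgb_to_ycocg(r: int, g: int, b: int):
    """Forward YCoCg-R transform via standard lifting steps (lossless, integer)."""
    co = r - b
    t = b + (co >> 1)
    cg = g - t
    y = t + (cg >> 1)
    return y, co, cg

def ycocg_to_rgb(y: int, co: int, cg: int):
    """Inverse lifting steps reconstruct RGB exactly."""
    t = y - (cg >> 1)
    g = cg + t
    b = t - (co >> 1)
    r = b + co
    return r, g, b

print(ycocg_to_rgb(*rgb_to_ycocg(120, 200, 50)))  # (120, 200, 50)
```

Because each lifting step is exactly invertible, the round trip is bit-exact, with no multipliers required, only adds and shifts.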
Secure Protocol Engines by Secure-IC focus on enhancing security and network processing efficiency for System-on-Chip (SoC) designs. These high-performance IP blocks are engineered to handle intensive security tasks, offloading critical processes from the main CPU to improve overall system efficiency. Designed for seamless integration, these modules cater to various applications requiring stringent security standards. By leveraging cryptographic acceleration, Secure Protocol Engines facilitate rapid processing of secure communications, allowing SoCs to maintain fast response times even under high-demand conditions. The engines provide robust support for a broad range of security protocols and cryptographic functions, ensuring data integrity and confidentiality across communication channels. This ensures that devices remain secure from unauthorized access and data breaches, particularly in environments prone to cyber threats. Secure Protocol Engines are integral to designing resilient systems that need to process large volumes of secure transactions, such as in financial systems or highly regulated industrial applications. Their architecture allows for scalability and adaptability, making them suitable for both existing systems and new developments in the security technology domain.
TimeServo is a sophisticated System Timer IP Core for FPGAs, providing high-resolution timing essential for line-rate-independent packet timestamping. Its architecture allows standalone operation without host processor interaction, leveraging a flexible PI-DPLL that can discipline to an external 1 PPS signal, ensuring time precision and stability across applications. Besides functioning as a standalone timing solution within an FPGA, TimeServo offers multi-output capabilities with up to 32 independent time domains. Each time output can be individually configured, supporting multiple timing formats, including Binary 48.32 and IEEE standards, which offer great flexibility for timing-sensitive applications. TimeServo combines software control via an AXI interface with an internal, logic-based phase accumulator and digital phase-locked loop mechanism, achieving impressive jitter performance. Consequently, TimeServo serves as an unparalleled solution for network operators and developers requiring precise timing and synchronization in their systems.
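A "Binary 48.32" timestamp, read as 48 integer bits of seconds and 32 fractional bits, is straightforward to pack and unpack in software. The sketch below assumes that interpretation of the format name (the exact field layout is not spelled out in the listing):

```python
FRAC_BITS = 32  # assumed: 32-bit binary fraction of a second

def to_binary_48_32(seconds: float) -> int:
    """Pack a time value into 48.32 fixed point (48-bit seconds, 32-bit fraction)."""
    ts = round(seconds * (1 << FRAC_BITS))
    assert 0 <= ts < (1 << 80), "timestamp exceeds 48.32 range"
    return ts

def from_binary_48_32(ts: int) -> float:
    """Unpack a 48.32 fixed-point timestamp back to seconds."""
    return ts / (1 << FRAC_BITS)

ts = to_binary_48_32(1.5)
print(hex(ts))                # 0x180000000  (1 second + 0.5 * 2^32)
print(from_binary_48_32(ts))  # 1.5
```

Under this reading, the fractional step is 2^-32 s (roughly 233 ps), which is consistent with the sub-nanosecond resolution such timestamping cores target.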
XMOS's xcore-200 is an advanced processor that excels in delivering multichannel audio processing and low-latency performance. Designed to support complex audio and voice processing requirements, it provides developers with the ability to integrate high-quality audio functionalities into their products. The xcore-200's architecture is engineered to allow precise control and processing capabilities, making it ideal for applications in consumer electronics and professional audio equipment. With an emphasis on reducing development time and enhancing product capabilities, the xcore-200 offers an adaptable solution equipped with an array of input and output options to meet diverse processing needs. Its powerful DSP capabilities ensure efficient processing of audio signals, enabling smoother, more reliable audio experiences for end-users. Moreover, the xcore-200 is optimized for power efficiency, supporting a range of applications without significant energy expenditure. The xcore-200 facilitates seamless integration with other technologies, making it a versatile choice for developers who need flexibility in their design process. Whether it's embedded AI functionalities or advanced audio processing demands, the xcore-200 provides a comprehensive platform for building sophisticated digital audio systems. Its capacity to manage multiple processing tasks concurrently ensures that products powered by this processor deliver robust and high-performance outcomes.
The TSP1 is an innovative neural network accelerator chip developed by ABR, designed to advance AI capabilities in battery-powered devices. It supports sophisticated applications such as natural voice interfaces and biosignal classification, demonstrating efficient data handling and low power consumption. This chip is engineered to process sensor signals robustly and independently, which enables highly efficient, state-space networks suitable for diverse applications. Benefiting from ABR's pioneering Legendre Memory Unit (LMU) state-space model, the TSP1 represents a new frontier in data processing efficiency, boasting remarkable power savings. This AI chip is tailored for edge computing contexts, proving itself ideal for applications like AR/VR, wearable technology, and smart home setups. With the TSP1, users can expect quick AI inference times, around 20 milliseconds, while maintaining secure on-chip storage and offering interfaces for multi-sensor inputs. The powerful combination of state-space networks and custom-tailored hardware optimization ensures the TSP1 leads in both scalability for large AI models and energy-aware performance for various sectors, including IoT and industrial applications.
ARC Processor IP from Synopsys is engineered to deliver high performance and superior energy efficiency for embedded applications. It comprises a customizable architecture, allowing developers to tailor it for specific application needs while maintaining low power consumption. Ideal for IoT, automotive, and high-performance computing applications, this processor IP emphasizes scalability and flexibility, enabling the creation of sophisticated system designs tailored to unique industry requirements.
The iniDSP core is a powerful 16-bit digital signal processor designed to handle signal processing tasks efficiently. It is particularly suited for applications demanding intensive arithmetic processing such as audio processing, telecommunications, and radar systems. By offering flexible design integration and high-performance capabilities, iniDSP manages complex calculations with impressive efficiency. The processor's architecture is constructed to support and simplify the implementation of complex signal algorithms, enabling seamless integration into a range of electronic systems. With its proven application across various sectors, iniDSP provides a robust solution for engineers aiming to optimize digital signal processing.
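The arithmetic workload a 16-bit DSP like this targets is dominated by multiply-accumulate operations. The sketch below models a Q15 fixed-point FIR filter with a wide accumulator, the textbook pattern on 16-bit DSPs; it is a generic illustration, not iniDSP's instruction set.

```python
# Generic sketch of 16-bit fixed-point (Q15) multiply-accumulate,
# the core operation behind FIR filtering on a 16-bit DSP.

def q15(x: float) -> int:
    """Convert a float in [-1, 1) to Q15 (1 sign bit, 15 fraction bits)."""
    return max(-32768, min(32767, int(round(x * 32768))))

def fir_q15(samples, coeffs):
    """FIR filter using a wide accumulator, as a 16-bit DSP would."""
    out = []
    for n in range(len(samples)):
        acc = 0  # 32-bit-style accumulator avoids overflow across taps
        for k, c in enumerate(coeffs):
            if n - k >= 0:
                acc += samples[n - k] * c  # 16x16 -> 32-bit products
        out.append(acc >> 15)  # rescale the sum back to Q15
    return out

# Two-tap averager over a constant 0.5 input
y = fir_q15([q15(0.5), q15(0.5)], [q15(0.5), q15(0.5)])
print(y)  # [8192, 16384], i.e. 0.25 then 0.5 in Q15
```

Keeping the accumulator wider than the 16-bit operands is the key design choice: it lets many taps sum without overflow before the single rescale at the end.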
TimbreAI T3 addresses audio processing needs by embedding AI in sound-based applications, particularly suitable for power-constrained devices like wireless headsets. It's engineered for exceptional power efficiency, requiring less than 300 µW to operate while maintaining a performance capacity of 3.2 GOPS. This AI inference engine simplifies deployment by requiring no changes to existing trained models, thus preserving accuracy and efficiency. The TimbreAI T3's architecture ensures that it handles noise reduction seamlessly, offering core audio neural network support. This capability is complemented by its flexible software stack, further reinforcing its strength as a low-power, high-functionality solution for state-of-the-art audio applications.
Engineered for top-tier AI applications, the Origin E8 excels in delivering high-caliber neural processing for industries spanning from automotive solutions to complex data center implementations. The E8's design supports singular core performance up to 128 TOPS, while its adaptive architecture allows easy multi-core scalability to exceed PetaOps. This architecture eradicates common performance bottlenecks associated with tiling, delivering robust throughput without unnecessary power or area compromises. With an impressive suite of features, the E8 facilitates remarkable computational capacity, ensuring that even the most intricate AI networks function smoothly. This high-performance capability, combined with its relatively low power usage, positions the E8 as a leader in AI processing technologies where high efficiency and reliability are imperative.
Domain-Specific RISC-V Cores from Bluespec provide targeted acceleration for specific application areas by packaging accelerators as software threads. This approach enables developers to achieve systematic hardware acceleration, improving the performance of applications that demand high computational power. These cores are designed to support scalable concurrency, which means they can efficiently handle multiple operations simultaneously, making them ideal for complex scenarios that require high throughput and low latency. The ease of scalability ensures that developers can rapidly adapt their designs to meet evolving demands without extensive redesign. Bluespec's domain-specific cores are well-suited for specialized markets where performance and efficiency can make a significant impact. By providing a robust platform for acceleration, Bluespec empowers developers to create competitive and rapidly deployable solutions.
The Blazar Bandwidth Accelerator Engine is an advanced memory solution that integrates in-memory computing capabilities for high-capacity, low-latency applications. This engine accelerates data processing by incorporating up to 32 RISC cores, significantly boosting data throughput and application performance. The built-in memory offers a capacity of up to 1Gb, effectively supporting high bandwidth and low latency operations critical in modern networking and data center infrastructures. Key features include the ability to perform tasks traditionally reserved for external processing units directly within the memory, reducing data movement and improving system efficiency. By embedding specific in-memory operations such as BURST and RMW functions, the Blazar engine minimizes execution time and interaction with external processors, offering optimal performance in SmartNICs and SmartSwitch applications. This accelerator engine is specifically designed to operate seamlessly with dual-port memory architectures, allowing parallel data access and processing. This feature is crucial for applications requiring high reliability and fast data aggregation, thus supporting sophisticated networking requirements inherent in 5G and advanced computing environments.
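The benefit of in-memory RMW operations like those described above can be sketched with a simple transaction count: a host-side increment needs a read and a write across the memory interface, while an in-memory RMW needs one command. The `Memory` class and its cost model below are purely illustrative, not the Blazar command set.

```python
# Conceptual sketch: why an in-memory read-modify-write (RMW) command
# halves bus traffic for counter-style updates. Cost model is illustrative.

class Memory:
    def __init__(self):
        self.cells = {}
        self.bus_transactions = 0

    def host_increment(self, addr):
        # Host-side RMW: the read crosses the bus, then the write does too.
        self.bus_transactions += 1
        value = self.cells.get(addr, 0)
        self.bus_transactions += 1
        self.cells[addr] = value + 1

    def in_memory_increment(self, addr):
        # In-memory RMW: one command; the modify happens inside the device.
        self.bus_transactions += 1
        self.cells[addr] = self.cells.get(addr, 0) + 1

host = Memory()
for _ in range(1000):
    host.host_increment(0x10)
print(host.bus_transactions)  # 2000 bus transactions host-side

dev = Memory()
for _ in range(1000):
    dev.in_memory_increment(0x10)
print(dev.bus_transactions)  # 1000 with in-memory RMW
```

For workloads such as per-flow packet counters in a SmartNIC, this halving of interface traffic is where the data-movement savings come from.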
The Origin E2 NPU cores offer a balanced solution for AI inference by optimizing for both power and area without compromising performance. These cores are expertly crafted to save system power in devices such as smartphones and edge nodes. Their design supports a wide variety of networks, including RNNs and CNNs, catering to the dynamic demands of consumer and industrial applications. With customizable performance ranging from 1 to 20 TOPS, they are adept at handling various AI-driven tasks while reducing latency. The E2 architecture is ingeniously configured to enable parallel processing, affording high resource utilization that minimizes memory demands and system overhead. This results in a flexible NPU architecture that serves as a reliable backbone for deploying efficient AI models across different platforms.
Catalyst-GPU represents a cost-effective and powerful graphics solution for the PXIe/CPCIe platform. Equipped with NVIDIA Quadro T600 and T1000 GPUs, this module excels in providing enhanced graphics and computing acceleration required by modern signal processing and AI applications. One of the standout features of Catalyst-GPU is its ease of programming and high compute capabilities. It meets the requirements of both Modular Test and Measurement (T&M) and Electronic Warfare (EW) sectors, offering significant performance improvements at reduced operational costs. Built as a part of the Catalyst family, this module allows access to advanced graphics capabilities of NVIDIA technology, paving the way for efficient data processing and accelerated computational tasks. The Catalyst-GPU sets itself apart as a robust choice for users needing reliable high-performance graphics within a modular system framework.
Specialty Microcontrollers from Advanced Silicon harness the capabilities of the latest RISC-V architectures for advanced processing tasks. These microcontrollers are particularly suited to applications involving image processing, thanks to built-in coprocessing units that enhance their algorithm execution efficiency. They serve as ideal platforms for sophisticated touch screen interfaces, offering a balance of high performance, reliability, and low power consumption. The integrated features allow for the enhancement of complex user interfaces and their interaction with other system components to improve overall system functionality and user experience.
The v-MP6000UDX is a versatile visual processing unit designed to power deep learning, computer vision, and video coding needs all through a single, unified architecture. This processor excels at handling high-performance tasks on embedded systems, ensuring efficiency in both power and silicon area utilization. As industries seek to integrate more sophisticated AI-driven capabilities, the v-MP6000UDX stands out by providing a comprehensive solution that runs all forms of embedded computing tasks seamlessly. A significant advantage of the v-MP6000UDX is its ability to manage complex neural networks in real-time, boasting a dynamically programmable nature that surpasses hardwired counterparts in flexibility and longevity. It facilitates the concurrent execution of various computational workflows such as signal and image processing without the traditional need for multiple hardware units, thereby reducing overall system complexity and enhancing power efficiency. The processor's architecture is particularly noteworthy for its scalability, supporting configurations from a minimal core count to over a thousand cores on a single chip. This makes the v-MP6000UDX adaptable for a wide spectrum of applications ranging from low-powered sensors to high-performance computing setups. Its support for multiple software environments and AI frameworks adds an extra layer of versatility, allowing developers to optimize and deploy a broad variety of deep learning models efficiently.
Dillon Engineering's Floating Point Library is designed to offer IEEE 754 compliant floating point arithmetic capabilities for various applications. Available as pre-designed modules, these cores enable efficient execution of complex mathematical operations, providing critical support for scientific computations and digital signal processing where precision is key. The library offers single, double, and custom precision options, catering to diverse computational needs. The inclusion of pipelined arithmetic ensures that operations such as addition, subtraction, multiplication, and division are performed swiftly and with accuracy. This enhances the library's utility across applications that rely heavily on precise numerical computation. The integration ease and adaptability of the Floating Point Library make it an indispensable resource for projects that require high computational accuracy. Its capacity to handle extensive floating-point operations effectively aids in maintaining performance standards across various processor architectures, ensuring that it remains a vital tool for computation-intensive tasks.
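The precision options matter because the same value rounds differently in single and double precision. The stdlib snippet below shows this using `struct` to round a Python double to the nearest IEEE 754 single-precision value; it illustrates the general standard, not the library's internals.

```python
# Illustrating IEEE 754 precision choice: round a double through
# single precision and compare. Uses only the standard library.
import struct

def to_float32(x: float) -> float:
    """Round a double to the nearest IEEE 754 single-precision value."""
    return struct.unpack('f', struct.pack('f', x))[0]

print(to_float32(0.5) == 0.5)   # True: 0.5 is exact in both precisions
print(to_float32(0.1) == 0.1)   # False: 0.1 rounds differently in 32 bits
```

The gap between the two representations of 0.1 is on the order of 1e-9, which is exactly the kind of error that accumulates in long pipelined computations and drives the choice between single, double, or custom precision.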
The Origin E6 provides a formidable edge in AI processing for mobile and AR/VR applications, with performance ranging from 16 to 32 TOPS. Tailored to accommodate the latest AI models, the E6 benefits from Expedera's distinct packet-based architecture. This cutting-edge design simplifies parallel processing, which enhances efficiency while concurrently diminishing power and resource consumption. As an NPU, it supports an extensive array of video, audio, and text-based networks, thus delivering consistent performance even under complex workloads. The E6's high utilization rates minimize waste and amplify throughput, and its scalable, adaptable architecture makes it an optimal choice for forward-thinking devices requiring potent AI processing.
IMG DXS GPU is engineered to meet the needs of automotive and industrial applications where functional safety is paramount. Built on efficient PowerVR architecture, it ensures high-performance graphics rendering with a focus on reduced power consumption. The DXS technology supports comprehensive safety suites, catering to ADAS and digital cockpit applications, thereby addressing stringent automotive safety standards.
Designed specifically for the nuanced requirements of AI on-chip applications, the Calibrator for AI-on-Chips fine-tunes AI models to enhance their performance on specific hardware. This calibration tool adjusts the models for optimal execution, ensuring that chip resources are maximized for efficiency and speed. The Calibrator addresses challenges related to power usage and latency, providing tailored adjustments that fine-tune these parameters according to the hardware's unique characteristics. This approach ensures that AI models can be reliably deployed in environments with limited resources or specific operational constraints. Furthermore, the tool offers automated calibration processes to streamline customization tasks, reducing time-to-market and ensuring that AI models maintain high levels of accuracy and capability even as they undergo optimization for different chip architectures. Skymizer's Calibrator for AI-on-Chips is an essential component for developers and engineers looking to deploy AI solutions that require fine-grained control over model performance and resource management, thus securing the best possible outcomes from AI deployments.
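The calibration idea described above can be sketched generically: observe the dynamic range of a tensor on sample data, then pick a quantization scale so the model executes in low-precision integer arithmetic on the target chip. This is textbook symmetric min/max calibration, not Skymizer's actual algorithm.

```python
# Generic sketch of post-training quantization calibration:
# choose an int8 scale from observed activation ranges.
# (Illustrative only; not the Calibrator's real method.)

def calibrate_scale(samples, num_bits=8):
    """Pick a symmetric per-tensor scale covering the observed range."""
    max_abs = max(abs(v) for v in samples)
    qmax = 2 ** (num_bits - 1) - 1   # 127 for int8
    return max_abs / qmax

def quantize(x, scale):
    """Map a real value to a clamped int8 code."""
    return max(-128, min(127, round(x / scale)))

scale = calibrate_scale([-0.5, 0.2, 1.27])
print(quantize(1.27, scale))  # the largest observed value maps near qmax
```

Real calibrators refine this with percentile clipping or entropy-based range selection to trade outlier coverage against resolution, which is where hardware-specific tuning enters.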
Hypr_risc is a radar DSP accelerator enhanced by a RISC-V-based core, delivering high-efficiency computational capabilities for radar applications. Designed to cater to high-speed ADAS processing, it supports multi-radar environments and ensures optimal DSP performance. Its configurability allows it to adapt to various application parameters, balancing power, size, and computational demands across different processor architectures. Ideal for use in sophisticated automotive radar systems, it offers robust processing capabilities for advanced driver assistance.
The Atlas Series from MIPS is a comprehensive suite of compute subsystems tailored for the Physical AI landscape. Combining the three core components, Sense, Think, and Act, this series enables seamless real-time processing, decision-making, and actuation for autonomous systems. The Sense component is adept at handling immense data streams for efficient sensor fusion, while the Think component empowers AI inference models for precise edge-based decision processes. Act, the final component, excels in real-time motor control and efficient energy management, completing a robust framework for modern AI applications. Designed to scale and adapt rapidly, the Atlas Series offers unmatched performance across automotive, industrial, and robotics sectors.
Application Specific Instruction-set Processors (ASIPs) from Wasiela are customized processors optimized for specific application needs, providing efficient computational power for modern digital solutions. These processors are designed to excel in tasks requiring significant processing strength coupled with low power consumption, making them ideal for use in embedded systems and consumer electronics where power efficiency is crucial. Wasiela's ASIPs feature a configurable architecture, allowing for customization that matches specific application requirements. This adaptability not only enhances performance by minimizing unnecessary computational operations but also extends the battery life of devices through efficient power usage. The ASIP architecture supports diverse applications, offering enhanced performance for signal processing, encryption, and other specialized tasks. Their design framework facilitates easy integration into complex systems, providing developers a robust foundation upon which to build high-performance, custom solutions for ever-evolving technological demands.
The Miscellaneous FEC and DSP IP cores offered by Creonic extend beyond standard modulation and coding, contributing a suite of advanced signal processing components. These are crucial for specialized use cases in high-speed, multi-channel processing, supporting real-time applications smoothly and efficiently. Among these offerings, the NCR Processor and DVB-GSE encapsulators are uniquely poised for time synchronization and protocol-specific tasks. Combined with digital downconverters and wideband channelizers, Creonic's miscellaneous cores fill essential gaps in protocol architectures, ensuring comprehensive support for system architectures. The inclusion of IPs like the Ultrafast BCH Decoder and Doppler Channel components underscores the company's commitment to delivering trusted building blocks for complex communication landscapes. For designers aiming to complete their projects with reliable, efficient components, Creonic's miscellaneous DSP cores are significant assets.
The Aurora 8B/10B IP Core is a versatile serial protocol core that supports up to 6.6 Gbps per lane, providing an efficient solution for inter-FPGA communication or as an alternative to high-speed serial interfaces like PCI Express. This IP is compatible with various FPGA vendors, ensuring broad interoperability and application flexibility. It offers low latency and reliable data integrity through robust error-checking mechanisms, making it ideal for high-performance applications that require stable, fast data transfer, such as in telecommunications and high-speed computing.
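A useful rule of thumb for the core above: 8B/10B line coding transmits 10 bits on the wire for every 8 payload bits, so a 6.6 Gbps lane carries at most 6.6 × 8/10 = 5.28 Gbps of payload before protocol framing. The helper below captures that arithmetic.

```python
# 8B/10B coding overhead: 10 line bits carry 8 payload bits,
# so usable bandwidth is 80% of the raw line rate.

def payload_rate_gbps(line_rate_gbps: float, lanes: int = 1) -> float:
    """Usable payload bandwidth after 8B/10B encoding, before framing."""
    return line_rate_gbps * lanes * 8 / 10

print(payload_rate_gbps(6.6))     # ~5.28 Gbps on one 6.6 Gbps lane
print(payload_rate_gbps(6.6, 4))  # ~21.12 Gbps across four bonded lanes
```

Aurora framing (clock compensation, channel bonding sequences) shaves a little more off in practice, so treat the 80% figure as an upper bound on payload throughput.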