Platform Level IP
Platform Level IP is a critical category within the semiconductor IP ecosystem, offering a wide array of solutions that are fundamental to the design and efficiency of semiconductor devices. This category includes various IP blocks and cores tailored for enhancing system-level performance, whether in consumer electronics, automotive systems, or networking applications. Suitable for both embedded control and advanced data processing tasks, Platform Level IP encompasses versatile components necessary for building sophisticated, multicore systems and other complex designs.
Subcategories within Platform Level IP cover a broad spectrum of integration needs:
1. **Multiprocessor/DSP (Digital Signal Processing)**: This includes specialized semiconductor IPs for handling tasks that require multiple processor cores working in tandem. These IPs are essential for applications needing high parallelism and performance, such as media processing, telecommunications, and high-performance computing.
2. **Processor Core Dependent**: These semiconductor IPs are designed to be tightly coupled with specific processor cores, ensuring optimal compatibility and performance. They include enhancements that provide seamless integration with one or more predetermined processor architectures, often used in specific applications like embedded systems or custom computing solutions.
3. **Processor Core Independent**: Unlike core-dependent IPs, these are flexible solutions that can integrate with a wide range of processor cores. This adaptability makes them ideal for designers looking to future-proof their technological investments or who are working with diverse processing environments.
Overall, Platform Level IP offers a robust foundation for developing flexible, efficient, and scalable semiconductor devices, catering to a variety of industries and technological requirements. Whether enhancing existing architectures or pioneering new designs, semiconductor IPs in this category play a pivotal role in the innovation and evolution of electronic devices.
Akida's Neural Processor IP represents a leap in AI architecture design, tailored to provide exceptional energy efficiency and processing speed for an array of edge computing tasks. At its core, the processor mimics the synaptic activity of the human brain, efficiently executing tasks that demand high-speed computation and minimal power usage. This processor is equipped with configurable neural nodes supporting convolutional and fully-connected neural network processing. Each node accommodates a range of MAC operations, enhancing scalability from basic to complex deployment requirements. This scalability enables the development of lightweight AI solutions suited for consumer electronics as well as robust systems for industrial use. Onboard features like event-based processing and low-latency data communication significantly decrease the strain on host processors, enabling faster and more autonomous system responses. Akida's versatile functionality and ability to learn on the fly make it a cornerstone for next-generation technology solutions that aim to blend cognitive computing with practical, real-world applications.
The KL730 is a third-generation AI chip that integrates advanced reconfigurable NPU architecture, delivering up to 8 TOPS of computing power. This cutting-edge technology enhances computational efficiency across a range of applications, including CNN and transformer networks, while minimizing DDR bandwidth requirements. The KL730 also boasts enhanced video processing capabilities, supporting 4K 60FPS outputs. Drawing on more than a decade of Kneron's ISP expertise, the KL730 stands out with its noise reduction, wide dynamic range, fisheye correction, and low-light imaging performance. It caters to markets like intelligent security, autonomous vehicles, video conferencing, and industrial camera systems, among others.
The second-generation Akida platform builds upon the foundation of its predecessor with enhanced computational capabilities and increased flexibility for a broader range of AI and machine learning applications. This version supports 8-bit weights and activations in addition to the flexible 4- and 1-bit operations, making it a versatile solution for high-performance AI tasks. Akida 2 introduces support for programmable activation functions and skip connections, further enhancing the efficiency of neural network operations. These capabilities are particularly advantageous for implementing sophisticated machine learning models that require complex, interconnected processing layers. The platform also features support for Spatio-Temporal and Temporal Event-Based Neural Networks, advancing its application in real-time, on-device AI scenarios. Built as a silicon-proven, fully digital neuromorphic solution, Akida 2 is designed to integrate seamlessly with various microcontrollers and application processors. Its highly configurable architecture offers post-silicon flexibility, making it an ideal choice for developers looking to tailor AI processing to specific application needs. Whether for low-latency video processing, real-time sensor data analysis, or interactive voice recognition, Akida 2 provides a robust platform for next-generation AI developments.
The Metis AIPU PCIe AI Accelerator Card is engineered for developers demanding superior AI performance. With its quad-core Metis AIPU, this card delivers up to 214 TOPS, tackling challenging vision applications with unmatched efficiency. The PCIe card is designed with user-friendly integration in mind, featuring the Voyager SDK software stack that accelerates application deployment. Offering impressive processing speeds, the card supports up to 3,200 FPS for ResNet-50 models, providing a competitive edge for demanding AI tasks. Its design ensures it meets the needs of a wide array of AI applications, allowing for scalability and adaptability in various use cases.
The Akida IP is a groundbreaking neural processor designed to emulate the cognitive functions of the human brain within a compact and energy-efficient architecture. This processor is specifically built for edge computing applications, providing real-time AI processing for vision, audio, and sensor fusion tasks. The scalable neural fabric, ranging from 1 to 128 nodes, features on-chip learning capabilities, allowing devices to adapt and learn from new data with minimal external inputs, enhancing privacy and security by keeping data processing localized. Akida's unique design supports 4-, 2-, and 1-bit weight and activation operations, maximizing computational efficiency while minimizing power consumption. This flexibility in configuration, combined with a fully digital neuromorphic implementation, ensures a cost-effective and predictable design process. Akida is also equipped with event-based acceleration, drastically reducing the demands on the host CPU by facilitating efficient data handling and processing directly within the sensor network. Additionally, Akida's on-chip learning supports incremental learning techniques like one-shot and few-shot learning, making it ideal for applications that require quick adaptation to new data. These features collectively support a broad spectrum of intelligent computing tasks, including object detection and signal processing, all performed at the edge, thus eliminating the need for constant cloud connectivity.
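To make the event-based idea concrete, the NumPy sketch below shows how computation can scale with activation sparsity rather than layer size: only nonzero input events trigger weight accumulations. This is an illustrative model of the principle, not BrainChip's implementation.

```python
import numpy as np

def dense_layer_event_based(events_in, weights, threshold=1.0):
    """Event-based dense layer: only nonzero input events trigger
    weight accumulations, so work scales with activation sparsity."""
    potentials = np.zeros(weights.shape[1])
    for idx in np.flatnonzero(events_in):        # iterate active inputs only
        potentials += weights[idx] * events_in[idx]
    return (potentials >= threshold).astype(np.float32)  # fire on threshold

rng = np.random.default_rng(0)
x = np.where(rng.random(256) > 0.9, 1.0, 0.0)    # ~90%-sparse input events
w = rng.normal(scale=0.1, size=(256, 64))
out = dense_layer_event_based(x, w)
print(f"active inputs: {int(x.sum())}/256, output events: {int(out.sum())}")
```

With 90% of the inputs inactive, roughly 90% of the multiply-accumulates are skipped, which is where the reduction in host-processor load comes from.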
Universal Chiplet Interconnect Express, or UCIe, is a forward-looking interconnect technology that enables high-speed data exchanges between various chiplets. Developed to support a modular approach in chip design, UCIe enhances flexibility and scalability, allowing manufacturers to tailor systems to specific needs by integrating multiple functions into a single package. The architecture of UCIe facilitates seamless data communication, crucial in achieving high-performance levels in integrated circuits. It is designed to support multiple configurations and implementations, ensuring compatibility across different designs and maximizing interoperability. UCIe is pivotal in advancing the chiplet strategy, which is becoming increasingly important as devices require more complex and diverse functionalities. By enabling efficient and quick interchip communication, UCIe supports innovation in the semiconductor field, paving the way for the development of highly efficient and sophisticated systems.
The Yitian 710 Processor is a groundbreaking component in processor technology, designed with cutting-edge architecture to enhance computational efficiency. This processor is tailored for cloud-native environments, offering robust support for high-demand computing tasks. It is engineered to deliver significant improvements in performance, making it an ideal choice for data centers aiming to optimize their processing power and energy efficiency. With its advanced features, the Yitian 710 stands at the forefront of processor innovation, ensuring seamless integration with diverse technology platforms and enhancing the overall computing experience across industries.
MetaTF is BrainChip's premier development tool platform designed to complement its neuromorphic technology solutions. This platform is a comprehensive toolkit that empowers developers to convert and optimize standard machine learning models into formats compatible with BrainChip's Akida technology. One of its key advantages is its ability to adjust models into sparse formats, enhancing processing speed and reducing power consumption. The MetaTF framework provides an intuitive interface for integrating BrainChip’s specialized AI capabilities into existing workflows. It supports streamlined adaptation of models to ensure they are optimized for the unique characteristics of neuromorphic processing. Developers can utilize MetaTF to rapidly iterate and refine AI models, making the deployment process smoother and more efficient. By providing direct access to pre-trained models and tuning mechanisms, MetaTF allows developers to capitalize on the benefits of event-based neural processing with minimal configuration effort. This platform is crucial for advancing the application of machine learning across diverse fields such as IoT devices, healthcare technology, and smart infrastructure.
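As a sketch of the kind of workflow MetaTF enables, the snippet below builds a standard Keras model, quantizes it to low bit-widths, and converts it for the Akida runtime. The `cnn2snn` entry points and argument names are assumptions based on public MetaTF documentation and may differ between releases.

```python
# Hypothetical MetaTF flow: train a Keras model, quantize it, convert it.
# The cnn2snn names below are assumed from public MetaTF docs.
import tensorflow as tf
from cnn2snn import quantize, convert  # assumed MetaTF API

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])
# ... train with model.fit(...) as usual ...

# Reduce weights/activations to 4 bits, then map to Akida hardware.
model_q = quantize(model, weight_quantization=4, activ_quantization=4)
model_akida = convert(model_q)   # runs on Akida silicon or its simulator
```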
The Veyron V2 CPU represents Ventana's second-generation RISC-V high-performance processor, designed for cloud, data center, edge, and automotive applications. This processor offers outstanding compute capabilities with its server-class architecture, optimized for handling complex, virtualized, and cloud-native workloads efficiently. The Veyron V2 is available as both IP for custom SoCs and as a complete silicon platform, ensuring flexibility for integration into various technological infrastructures. Emphasizing a modern architectural design, it is fully compliant with the RISC-V RVA23 profile and combines high instructions-per-clock (IPC) with a power-efficient microarchitecture. Comprising multiple core clusters, this CPU is capable of delivering superior AI and machine learning performance, significantly boosting throughput and energy efficiency. The Veyron V2's advanced fabric interconnects and extensive cache architecture provide the necessary infrastructure for high-performance applications, ensuring broad market adoption and versatile deployment options.
The Quadric Chimera General Purpose Neural Processing Unit (GPNPU) delivers unparalleled performance for AI workloads, characterized by its ability to handle diverse and complex tasks without requiring separate processors for different operations. Designed to unify AI inference and traditional computing processes, the GPNPU supports matrix, vector, and scalar tasks within a single, cohesive execution pipeline. This design not only simplifies the integration of AI capabilities into system-on-chip (SoC) architectures but also significantly boosts developer productivity by allowing them to focus on optimizing rather than partitioning code. The Chimera GPNPU is highly scalable, supporting a wide range of operations across all market segments, including automotive applications with its ASIL-ready versions. With a performance range from 1 to 864 TOPS, it excels in running the latest AI models, such as vision transformers and large language models, alongside classic network backbones. This flexibility ensures that devices powered by Chimera GPNPU can adapt to advancing AI trends, making them suitable for applications that require both immediate performance and long-term capability. A key feature of the Chimera GPNPU is its fully programmable nature, making it a future-proof solution for deploying cutting-edge AI models. Unlike traditional NPUs that rely on hardwired operations, the Chimera GPNPU uses a software-driven approach with its source RTL form, making it a versatile option for inference in mobile, automotive, and edge computing applications. This programmability allows for easy updating and adaptation to new AI model operators, maximizing the lifespan and relevance of chips that utilize this technology.
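The NumPy sketch below illustrates why a unified pipeline matters: a typical detection post-process mixes matrix work, vector reductions, and data-dependent scalar loops. On a fixed-function NPU the scalar tail (non-maximum suppression here) falls back to the host CPU, whereas a GPNPU-style design keeps the whole function in one execution pipeline. This is an illustrative workload, not Quadric code.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two [x0, y0, x1, y1] boxes."""
    x0, y0 = max(a[0], b[0]), max(a[1], b[1])
    x1, y1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x1 - x0) * max(0.0, y1 - y0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def detect(features, weights, boxes, iou_thresh=0.5):
    # Matrix stage: what a fixed-function NPU accelerates well.
    scores = np.maximum(features @ weights, 0.0).mean(axis=1)
    # Scalar, data-dependent stage (NMS): what partitioned designs push
    # back to the host CPU, and a GPNPU keeps in the same pipeline.
    keep = []
    for i in np.argsort(scores)[::-1]:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(int(i))
    return keep

rng = np.random.default_rng(0)
feats, w = rng.random((8, 16)), rng.random((16, 4))
xy = rng.random((8, 2))
boxes = np.hstack([xy, xy + 0.3])                # 8 candidate boxes
print("kept detections:", detect(feats, w, boxes))
```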
Leveraging a high-performance RISC architecture, the eSi-3250 32-bit core efficiently integrates instruction and data caches. This makes it compatible with designs utilizing slower on-chip memories such as eFlash. The core not only supports an MMU for address translation but also allows for user-defined custom instructions, greatly enhancing its flexibility for specialized and high-performance applications.
The xcore.ai platform by XMOS is a versatile, high-performance microcontroller designed for the integration of AI, DSP, and real-time I/O processing. Focusing on bringing intelligence to the edge, this platform facilitates the construction of entire DSP systems using software without the need for multiple discrete chips. Its architecture is optimized for low-latency operation, making it suitable for diverse applications from consumer electronics to industrial automation. This platform offers a robust set of features conducive to sophisticated computational tasks, including support for AI workloads and enhanced control logic. The xcore.ai platform streamlines development processes by providing a cohesive environment that blends DSP capabilities with AI processing, enabling developers to realize complex applications with greater efficiency. By doing so, it reduces the complexity typically associated with chip integration in advanced systems. Designed for flexibility, xcore.ai supports a wide array of applications across various markets. Its ability to handle audio, voice, and general-purpose processing makes it an essential building block for smart consumer devices, industrial control systems, and AI-powered solutions. Coupled with comprehensive software support and development tools, the xcore.ai ensures a seamless integration path for developers aiming to push the boundaries of AI-enabled technologies.
The Metis AIPU M.2 Accelerator Module is designed for devices that require high-performance AI inference in a compact form factor. Powered by a quad-core Metis AI Processing Unit (AIPU), this module optimizes power consumption and integration, making it ideal for AI-driven applications. With a dedicated memory of 1 GB DRAM, it enhances the capabilities of vision processing systems, providing significant boosts in performance for devices with Next Generation Form Factor (NGFF) M.2 sockets. Ideal for use in computer vision systems and more, it offers hassle-free integration and evaluation with Axelera's Voyager SDK. This accelerator module is tailored for any application seeking to harness the power of AI processing efficiently. The Metis AIPU M.2 Module streamlines the deployment of AI applications, ensuring high performance with reduced power consumption.
The Talamo Software Development Kit (SDK) is an advanced solution from Innatera designed to expedite the development of neuromorphic AI applications. It integrates seamlessly with PyTorch, providing developers with a familiar environment to build and extend AI models specifically for spiking neural processors. By enhancing the standard PyTorch workflow, Talamo simplifies the complexity associated with constructing spiking neural networks, allowing a broader range of developers to create sophisticated AI solutions without requiring deep expertise in neuromorphic computing. Talamo's capabilities include automatic mapping of trained models onto Innatera's heterogeneous computing architecture, coupled with a robust architecture simulator for efficient validation and iteration. This means developers can iterate quickly and efficiently, optimizing their applications for performance and power without extensive upfront reconfiguration or capital outlay. The SDK supports the creation of application pipelines that merge signal processing with AI, including custom functions and neural network implementations. This gives developers the flexibility to tailor solutions to specific needs, be it in audio processing, gesture recognition, or environmental sensing. Through its comprehensive toolkit, Talamo SDK empowers users to translate conceptual models into high-performing AI applications that leverage the unique processing strengths of spiking neural networks, ultimately lowering barriers to innovation in low-power, edge-based AI.
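For context, the snippet below sketches a leaky integrate-and-fire (LIF) layer in stock PyTorch — the kind of spiking building block a tool like Talamo would train and then map onto the processor. It uses only standard PyTorch; Innatera's actual APIs are not shown.

```python
import torch

class LIFLayer(torch.nn.Module):
    """Minimal leaky integrate-and-fire layer in stock PyTorch --
    illustrative of the spiking building blocks a tool like Talamo
    trains and maps; this is not Innatera's API."""
    def __init__(self, n_in, n_out, decay=0.9, threshold=1.0):
        super().__init__()
        self.fc = torch.nn.Linear(n_in, n_out, bias=False)
        self.decay, self.threshold = decay, threshold

    def forward(self, spike_train):                  # (time, batch, n_in)
        v = torch.zeros(spike_train.shape[1], self.fc.out_features)
        out = []
        for x_t in spike_train:
            v = self.decay * v + self.fc(x_t)        # leak, then integrate
            s = (v >= self.threshold).float()        # emit spikes
            v = v * (1.0 - s)                        # reset fired neurons
            out.append(s)
        return torch.stack(out)

spikes = (torch.rand(20, 1, 64) > 0.8).float()       # 20 timesteps of events
print(int(LIFLayer(64, 10)(spikes).sum()), "output spikes")
```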
The aiWare NPU (Neural Processing Unit) by aiMotive is a high-performance hardware solution tailored specifically for automotive AI applications. It is engineered to accelerate inference tasks for autonomous driving systems, ensuring excellent performance across a variety of neural network workloads. aiWare delivers significant flexibility and efficiency, capable of scaling from basic Level 2 applications to complex multi-sensor Level 3+ systems. Achieving up to 98% efficiency, aiWare's design focuses on minimizing power utilization while maximizing core performance. It supports a broad spectrum of neural network architectures, including convolutional neural networks, transformers, and recurrent networks, making it suitable for diverse AI tasks in the automotive sphere. The NPU's architecture allows for minimal external memory access, thanks to its highly efficient dataflow design that capitalizes on on-chip memory caching. With a robust toolkit known as aiWare Studio, engineers can efficiently optimize neural networks without in-depth knowledge of low-level programming, streamlining development and integration efforts. The aiWare hardware is also compatible with V2X communication and advanced driver assistance systems, adapting to various operational needs with great dexterity. Its comprehensive support for automotive safety standards further cements its reputation as a reliable choice for integrating artificial intelligence into next-generation vehicles.
The SAKURA-II is a cutting-edge AI accelerator that combines high performance with low power consumption, designed to efficiently handle multi-billion-parameter models for generative AI applications. It is particularly suited for tasks that demand real-time AI inferencing with minimal batch processing, making it ideal for edge deployments. With a typical power usage of 8 watts and a compact footprint, the SAKURA-II achieves more than twice the AI compute efficiency of comparable solutions. This AI accelerator supports next-generation applications by providing up to 4x more DRAM bandwidth than alternatives, crucial for processing complex vision tasks and large language models (LLMs). The hardware offers advanced precision through software-enabled mixed-precision, which achieves near-FP32 accuracy, while a unique sparse computing feature optimizes memory usage. Its robust memory architecture supports up to 32 GB of DRAM, providing ample capacity for intensive AI workloads. The SAKURA-II's modular design allows it to be used in multiple form factors, addressing the diverse needs of modern computing tasks such as those found in smart cities, autonomous robotics, and smart manufacturing. Its adaptability is further enhanced by runtime-configurable data paths, allowing the device to optimize task scheduling and resource allocation dynamically. These features are powered by the Dynamic Neural Accelerator engine, ensuring efficient computation and energy management.
The A25 processor model is a versatile CPU suitable for a variety of embedded applications. With its 5-stage pipeline and 32/64-bit architecture, it delivers high performance even with a low gate count, which translates to efficiency in power-sensitive environments. The A25 is equipped with Andes Custom Extensions that enable tailored instruction sets for specific application accelerations. Supporting robust high-frequency operations, this model shines in its ability to manage data prefetching and cache coherence in multicore setups, making it adept at handling complex processing tasks within constrained spaces.
The KL630 is a pioneering AI chipset featuring Kneron's latest NPU architecture, which is the first to support Int4 precision and transformer networks. This cutting-edge design ensures exceptional compute efficiency with minimal energy consumption, making it ideal for a wide array of applications. With an ARM Cortex A5 CPU at its core, the KL630 excels in computation while maintaining low energy expenditure. This SOC is designed to handle both high and low light conditions optimally and is perfectly suited for use in diverse edge AI devices, from security systems to expansive city and automotive networks.
RaiderChip's GenAI v1 is a pioneering hardware-based generative AI accelerator, designed to perform local inference at the Edge. This technology integrates optimally with on-premises servers and embedded devices, offering substantial benefits in privacy, performance, and energy efficiency over traditional hybrid AI solutions. The design of the GenAI v1 NPU streamlines the process of executing large language models by embedding them directly onto the hardware, eliminating the need for external components like CPUs or internet connections. With its ability to support complex models such as Llama 3.2 with 4-bit quantization on LPDDR4 memory, the GenAI v1 achieves unprecedented efficiency in AI token processing, coupled with energy savings and reduced latency. What sets GenAI v1 apart is its scalability and cost-effectiveness, significantly outperforming competitive solutions such as Intel Gaudi 2, Nvidia's cloud GPUs, and Google's cloud TPUs in terms of memory efficiency. This solution maximizes the number of tokens generated per unit of memory bandwidth, thus addressing one of the primary limitations in generative AI workflows. Furthermore, the adept memory usage of GenAI v1 reduces the dependency on costly memory types like HBM, opening the door to more affordable alternatives without diminishing processing capabilities. With a target-agnostic approach, RaiderChip ensures the GenAI v1 can be adapted to various FPGAs and ASICs, offering configuration flexibility that allows users to balance performance with hardware costs. Its compatibility with a wide range of transformers-based models, including proprietary modifications, ensures GenAI v1's robust placement across sectors requiring high-speed processing, like finance, medical diagnostics, and autonomous systems. RaiderChip's innovation with GenAI v1 focuses on supporting both vanilla and quantized AI models, ensuring high computation speeds necessary for real-time applications without compromising accuracy. This capability underpins their strategic vision of enabling versatile and sustainable AI solutions across industries. By prioritizing integration ease and operational independence, RaiderChip provides a tangible edge in applying generative AI effectively and widely.
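The bandwidth argument can be made concrete with a back-of-envelope calculation: during autoregressive decoding, every generated token streams the full weight set from memory once, so the token rate is roughly bandwidth divided by model size. The figures below (an 8-billion-parameter model on a 25.6 GB/s LPDDR4 interface) are illustrative assumptions, not RaiderChip specifications.

```python
# Back-of-envelope: decoding one token streams the whole weight set once,
# so the token rate is roughly memory-bandwidth-bound.
def tokens_per_second(n_params, bits_per_weight, bandwidth_gbs):
    weight_bytes = n_params * bits_per_weight / 8
    return bandwidth_gbs * 1e9 / weight_bytes

BW = 25.6  # GB/s -- illustrative LPDDR4 figure, not a RaiderChip spec
for bits in (16, 4):
    print(f"{bits}-bit weights: ~{tokens_per_second(8e9, bits, BW):.1f} tok/s")
# 4-bit quantization yields ~4x the tokens per unit of bandwidth vs FP16
```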
The GenAI v1-Q from RaiderChip brings a specialized focus on quantized AI operations, reducing memory requirements significantly while maintaining impressive precision and speed. This innovative accelerator is engineered to execute large language models in real-time, utilizing advanced quantization techniques such as Q4_K and Q5_K, thereby enhancing AI inference efficiency especially in memory-constrained environments. By offering a 276% boost in processing speed alongside a 75% reduction in memory footprint, GenAI v1-Q empowers developers to integrate advanced AI capabilities into smaller, less powerful devices without sacrificing operational quality. This makes it particularly advantageous for applications demanding swift response times and low latency, including real-time translation, autonomous navigation, and responsive customer interactions. The GenAI v1-Q diverges from conventional AI solutions by functioning independently, free from external network or cloud dependencies. Its design harmonizes superior computational performance with scalability, allowing seamless adaptation across varied hardware platforms including FPGAs and ASIC implementations. This flexibility is crucial for tailoring performance parameters like model scale, inference speed, and power consumption to meet exacting user specifications effectively. RaiderChip's GenAI v1-Q addresses crucial AI industry needs with its ability to manage multiple transformer-based models and confidential data securely on-premises. This opens doors for its application in sensitive areas such as defense, healthcare, and financial services, where confidentiality and rapid processing are paramount. With GenAI v1-Q, RaiderChip underscores its commitment to advancing AI solutions that are both environmentally sustainable and economically viable.
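The sketch below shows a simplified per-block 4-bit quantizer to illustrate where the memory savings come from; the real Q4_K and Q5_K formats add super-blocks and quantized scales, so treat this as the idea rather than the exact layout.

```python
import numpy as np

def quantize_q4_blocks(w, block=32):
    """Simplified per-block 4-bit quantization (illustrative of the
    idea behind Q4_K-style schemes, not the exact ggml layout)."""
    w = w.reshape(-1, block)
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0 + 1e-12  # int range [-8, 7]
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return (q * scale).reshape(-1)

w = np.random.default_rng(1).normal(size=4096).astype(np.float32)
q, s = quantize_q4_blocks(w)
print(f"mean abs error: {np.abs(dequantize(q, s) - w).mean():.4f}")
# 4 bits/weight plus one fp32 scale per 32-weight block ~= 5 bits/weight:
print(f"memory vs fp32: {(4 + 32 / 32) / 32:.1%}")
```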
Micro Magic's Ultra-Low-Power 64-Bit RISC-V Core is engineered for superior energy efficiency, consuming a mere 10mW while operating at a clock speed of 1GHz. This processor core is designed to excel under low voltage conditions, delivering high performance without compromising on power conservation. It is ideal for applications requiring prolonged battery life or those operating in energy-sensitive environments. This processor integrates Micro Magic's advanced design methodologies, allowing for operation at frequencies up to 5GHz when necessary. The RISC-V Core capabilities are enhanced by solid construction, ensuring reliability and robust performance across various use-cases, making it an adaptable solution for modern electronic designs. With its cutting-edge design, this RISC-V core supports rapid deployment in numerous applications, especially in areas demanding high computational power alongside reduced energy usage. Micro Magic's advanced techniques ensure that this core is not only fast but also supports scalable integration into various systems with ease.
The Hanguang 800 AI Accelerator is a high-performance AI processor developed to meet the complex demands of artificial intelligence workloads. This accelerator is engineered with cutting-edge AI processing capabilities, enabling rapid data analysis and machine learning model inference. Designed for flexibility, the Hanguang 800 delivers superior computation speed and energy efficiency, making it an optimal choice for AI applications in a variety of sectors, from data centers to edge computing. By supporting high-volume data throughput, it enables organizations to achieve significant advantages in speed and efficiency, facilitating the deployment of intelligent solutions.
TTTech's Time-Triggered Ethernet (TTEthernet) is a breakthrough communication technology that combines the reliability of traditional Ethernet with the precision of time-triggered protocols. Designed to meet stringent safety requirements, this IP is fundamental in environments where fail-safe operation is an absolute requirement, such as human spaceflight, nuclear facilities, and other high-risk settings. TTEthernet integrates seamlessly with existing Ethernet infrastructure while providing deterministic control over data transmission times, allowing for real-time application support. Its primary advantage lies in supporting triple-redundant networks, which ensures dual fault-tolerance, an essential feature exemplified in its use by NASA's Orion spacecraft. The integrity and precision offered by Time-Triggered Ethernet make it ideal for implementing ECSS Engineering standards in space applications. It not only permits robust redundancy and high bandwidth (exceeding 10 Gbps) but also supports interoperability with various commercial off-the-shelf components, making it a versatile solution for complex network architectures.
The AX45MP is engineered as a high-performance processor that supports multicore architecture and advanced data processing capabilities, particularly suitable for applications requiring extensive computational efficiency. Powered by the AndesCore processor line, it capitalizes on a multicore symmetric multiprocessing framework, integrating up to eight cores with robust L2 cache management. The AX45MP incorporates advanced features such as vector processing capabilities and support for MemBoost technology to maximize data throughput. It caters to high-demand applications including machine learning, digital signal processing, and complex algorithmic computations, ensuring data coherence and efficient power usage.
The RISC-V Core-hub Generators from InCore are tailored for developers who need advanced control over their core architectures. This innovative tool enables users to configure core-hubs precisely at the instruction set and microarchitecture levels, allowing for optimized design and functionality. The platform supports diverse industry applications by facilitating the seamless creation of scalable and customizable RISC-V cores. With the RISC-V Core-hub Generators, InCore empowers users to craft their own processor solutions from the ground up. This flexibility is pivotal for businesses looking to capitalize on the burgeoning RISC-V ecosystem, providing a pathway to innovation with reduced risk and cost. Incorporating feedback from leading industry partners, these generators are designed to lower verification costs while accelerating time-to-market for new designs. Users benefit from InCore's robust support infrastructure and a commitment to simplifying complex chip design processes. This product is particularly beneficial for organizations aiming to integrate RISC-V technology efficiently into their existing systems, ensuring compatibility and enhancing functionality through intelligent automation and state-of-the-art tools.
The KL520 marks Kneron's foray into the edge AI landscape, offering an impressive combination of size, power efficiency, and performance. Armed with dual ARM Cortex M4 processors, this chip can operate independently or as a co-processor to enable AI functionalities such as smart locks and security monitoring. The KL520 is adept at 3D sensor integration, making it an excellent choice for applications in smart home ecosystems. Its compact design allows devices powered by it to operate on minimal power, such as running on AA batteries for extended periods, showcasing its exceptional power management capabilities.
Eliyan's NuLink Die-to-Die (D2D) PHY products are designed to provide high-performance, low-power connectivity between chips, or 'chiplets,' in a system. Using standard organic laminate packaging, these IP cores maintain power and performance levels that would traditionally require advanced packaging techniques like silicon interposers. This eliminates the need for such technology, allowing cost-effective system design and reducing thermal, test, and production challenges while maintaining performance. Eliyan’s approach enables flexibility, allowing a broad substrate area that supports more chiplets in the package, significantly boosting performance and power metrics. These D2D PHY cores accommodate various industry standards, including UCIe and BoW, providing configurations tailor-made for optimal bump map layout, thus enhancing overall system efficiency.
AndesCore Processors offer a robust lineup of high-performance CPUs tailored for diverse market segments. Employing the AndeStar V5 instruction set architecture, all of these cores are RISC-V compliant. The processor family is classified into different series, including the Compact, 25-Series, 27-Series, 40-Series, and 60-Series, each featuring unique architectural advances. For instance, the Compact Series specializes in delivering compact, power-efficient processing, while the 60-Series is optimized for high-performance out-of-order execution. Additionally, AndesCore processors extend customization through Andes Custom Extension, which allows users to define specific instructions to accelerate application-specific tasks, offering a significant edge in design flexibility and processing efficiency.
Crafted to deliver significant power savings, the Tianqiao-70 is a low-power RISC-V CPU that excels in commercial-grade scenarios. This 64-bit CPU core is primarily designed for applications where power efficiency is critical, such as mobile devices and computationally intensive IoT solutions. The core's architecture is specifically optimized to perform under stringent power budgets without compromising on the processing power needed for complex tasks. It provides an efficient solution for scenarios that demand reliable performance while maintaining a low energy footprint. Through its refined design, the Tianqiao-70 supports a broad spectrum of applications, including personal computing, machine learning, and mobile communications. Its versatility and power-awareness make it a preferred choice for developers focused on sustainable and scalable computing architectures.
The NMP-750 is AiM Future's powerful edge computing accelerator designed specifically for high-performance tasks. With up to 16 TOPS of computational throughput, this accelerator is well suited to automotive systems, autonomous mobile robots (AMRs), UAVs, and AR/VR applications. Fitted with up to 16 MB of local memory and featuring RISC-V or Arm Cortex-R/A 32-bit CPUs, it supports diverse data processing requirements crucial for modern technological solutions. The versatility of the NMP-750 is displayed in its ability to manage complex processes such as multi-camera stream processing and spectral efficiency management. It is also an apt choice for applications that require energy management and building automation, demonstrating exceptional potential in smart city and industrial setups. With its robust architecture, the NMP-750 ensures seamless integration into systems that need to handle large data volumes and support high-speed data transmission. This makes it ideal for applications in telecommunications and security where infrastructure resilience is paramount.
The Azurite Core-hub by InCore Semiconductors is a sophisticated solution designed to offer scalable RISC-V SoCs with high-speed secure interconnect capabilities. This processor is tailored for performance-demanding applications, ensuring that systems maintain robust security while executing tasks at high speeds. Azurite leverages advanced interconnect technologies to enhance the communication between components within a SoC, making it ideal for industries that require rapid data transfer and high processing capabilities. The core is engineered to be scalable, supporting a wide range of applications from edge AI to functional safety systems, adapting seamlessly to various industry needs. Engineered with a focus on security, the Azurite Core-hub incorporates features that protect data integrity and system operation in a dynamic technological landscape. This makes it a reliable choice for companies seeking to integrate advanced RISC-V architectures into their security-focused applications, offering not just innovation but also peace of mind with its secure design.
The N Class RISC-V CPU IP from Nuclei is tailored for applications where space efficiency and power conservation are paramount. It features a 32-bit architecture and is highly suited for microcontroller applications within the AIoT realm. The N Class processors are crafted to provide robust processing capabilities while maintaining a minimal footprint, making them ideal candidates for devices that require efficient power management and secure operations. By adhering to the open RISC-V standard, Nuclei ensures that these processors can be seamlessly integrated into various solutions, offering customizable options to fit specific system requirements.
RapidGPT is a next-generation electronic design automation tool powered by AI. Designed for those in the hardware engineering field, it allows for a seamless transition from ideas to physical hardware without the usual complexities of traditional design tools. The interface is highly intuitive, engaging users with natural language interaction to enhance productivity and reduce the time required for design iterations.

Enhancing the entire design process, RapidGPT begins with concept development and guides users through to the final stages of bitstream or GDSII generation. This tool effectively acts as a co-pilot for engineers, allowing them to easily incorporate third-party IPs, making it adaptable for various project requirements. This adaptability is paramount for industries where speed and precision are of the essence.

PrimisAI has integrated novel features such as AutoReview™, which provides automated HDL audits; AutoComment™, which generates AI-driven comments for HDL files; and AutoDoc™, which helps create comprehensive project documentation effortlessly. These features collectively make RapidGPT not only a design tool but also a comprehensive project management suite.

The effectiveness of RapidGPT is made evident in its robust support for various design complexities, providing a scalable solution that meets specific user demands from individual developers to large engineering teams seeking enterprise-grade capabilities.
RAIV represents Siliconarts' General Purpose-GPU (GPGPU) offering, engineered to accelerate data processing across diverse industries. This versatile GPU IP is essential in sectors engaged in high-performance computing tasks, such as autonomous driving, IoT, and sophisticated data centers. With RAIV, Siliconarts taps into the potential of the fourth industrial revolution, enabling rapid computation and seamless data management. The RAIV architecture is poised to deliver unmatched efficiency in high-demand scenarios, supporting massive parallel processing and intricate calculations. It provides an adaptable framework that caters to the needs of modern computing, ensuring balanced workloads and optimized performance. Whether used for VR/AR applications or supporting the back-end infrastructure of data-intensive operations, RAIV is designed to meet and exceed industry expectations. RAIV’s flexible design can be tailored to enhance a broad spectrum of applications, promising accelerated innovation in sectors dependent on AI and machine learning. This GPGPU IP not only underscores Siliconarts' commitment to technological advancement but also highlights its capability to craft solutions that drive forward computational boundaries.
The **Ceva-SensPro DSP family** unites scalar processing units and vector processing units under an 8-way VLIW architecture. The family incorporates advanced control features such as a branch target buffer and a loop buffer to speed up execution and reduce power. There are six family members, each with a different array of MACs, targeted at different application areas and performance points. These range from the Ceva-SP100, providing 128 8-bit integer or 32 16-bit integer MACs at 0.2 TOPS performance for compact applications such as vision processing in wearables and mobile devices; to the Ceva-SP1000, with 1024 8-bit or 256 16-bit MACs reaching 2 TOPS for demanding applications such as automotive, robotics, and surveillance. Two of the family members, the Ceva-SPF2 and Ceva-SPF4, employ 32 or 64 32-bit floating-point MACs, respectively, for applications in electric-vehicle power-train control and battery management. These two members are supported by libraries for Eigen Linear Algebra, MATLAB vector operations, and the TVM graph compiler. Highly configurable, the vector processing units in all family members can add domain-specific instructions for such areas as vision processing, radar, or simultaneous localization and mapping (SLAM) for robotics. Integer family members can also add optional floating-point capabilities. All family members have independent instruction and data memory subsystems and a Ceva-Connect queue manager for AXI-attached accelerators or coprocessors. The Ceva-SensPro2 family is programmable in C/C++ as well as in Halide and OpenMP, and supported by an Eclipse-based development environment, extensive libraries spanning a wide range of applications, and the Ceva-NeuPro Studio AI development environment. [**Learn more about Ceva-SensPro2 solution>**](https://www.ceva-ip.com/product/ceva-senspro2/?utm_source=silicon_hub&utm_medium=ip_listing&utm_campaign=ceva_senspro2_page)
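A quick sanity check on the quoted peak figures: one MAC counts as two operations (a multiply plus an add), so peak TOPS ≈ MACs × clock (GHz) × 2 / 1000. The 1 GHz clock below is an assumption for illustration, not a Ceva specification.

```python
# One MAC = 2 ops (multiply + add), so peak TOPS ~= MACs * clock_GHz * 2 / 1000.
def peak_tops(n_macs, clock_ghz=1.0):    # 1 GHz clock is an assumption
    return n_macs * clock_ghz * 2 / 1e3

for name, macs in [("Ceva-SP100", 128), ("Ceva-SP1000", 1024)]:
    print(f"{name}: ~{peak_tops(macs):.2f} TOPS")
# -> ~0.26 and ~2.05 TOPS, in line with the quoted 0.2 and 2 TOPS figures
```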
The Neural Processing Unit (NPU) offered by OPENEDGES is engineered to accelerate machine learning tasks and AI computations. Designed for integration into advanced processing platforms, this NPU enhances the ability of devices to perform complex neural network computations quickly and efficiently, significantly advancing AI capabilities. This NPU is built to handle both deep learning and inferencing workloads, utilizing highly efficient data management processes. It optimizes the execution of neural network models with acceleration capabilities that reduce power consumption and latency, making it an excellent choice for real-time AI applications. The architecture is flexible and scalable, allowing it to be tailored for specific application needs or hardware constraints. With support for various AI frameworks and models, the OPENEDGES NPU ensures compatibility and smooth integration with existing AI solutions. This allows companies to leverage cutting-edge AI performance without the need for drastic changes to legacy systems, making it a forward-compatible and cost-effective solution for modern AI applications.
The Codasip RISC-V BK Core Series is designed to offer flexible and high-performance core options catering to a wide range of applications, from low-power tasks to intricate computational needs. This series achieves an optimal balance of power consumption and processing speed, making it suitable for applications demanding energy efficiency without compromising performance. These cores are fully RISC-V compliant, allowing for easy customization to suit specific needs by modifying the processor's architecture or instruction set through Codasip Studio. The BK Core Series streamlines the process of developing precise computing solutions, ideal for IoT edge devices and sensor controllers where both small area and low power are critical. Moreover, the BK Core Series supports architectural exploration, enabling users to optimize the core design specifically tailored to their applications. This capability ensures that each core delivers the expected power, efficiency, and performance metrics required by modern technological solutions.
The General Purpose Accelerator (Aptos) from Ascenium stands out as a redefining force in the realm of CPU technology. It seeks to overcome the limitations of traditional CPUs by providing a solution that tackles both performance inefficiencies and high energy demands. Leveraging compiler-driven architecture, this accelerator introduces a novel approach by simplifying CPU operations, making it exceptionally suited for handling generic code. Notably, it offers compatibility with the LLVM compiler, ensuring a wide range of applications can be adapted seamlessly without rewrites. The Aptos excels in performance by embracing a highly parallel yet simplified CPU framework that significantly boosts efficiency, reportedly achieving up to four times the performance of cutting-edge CPUs. Such advancements cater not only to performance-oriented tasks but also substantially mitigate energy consumption, providing a dual benefit of cost efficiency and reduced environmental impact. This makes Aptos a valuable asset for data centers seeking to optimize their energy footprint while enhancing computational capabilities. Additionally, the Aptos architecture supports efficient code execution by resolving tasks predominantly at compile-time, allowing the processor to handle workloads more effectively. This allows standard high-level language software to run with improved efficiency across diverse computing environments, aligning with an overarching goal of greener computing. By maximizing operational efficiency and reducing carbon emissions, Aptos propels Ascenium into a leading position in the sustainable and high-performance computing sector.
The Digital PreDistortion (DPD) Solution offered by Systems4Silicon is a versatile technology aimed at significantly enhancing the efficiency of RF power amplifiers. This advanced sub-system is scalable and adaptable to both ASIC and FPGA platforms, ensuring broad compatibility across various device vendors. The DPD solution meticulously enhances linearity, crucial for devices operating within multi-standard environments, such as 5G and O-RAN systems.

Designed to optimize the signal processing in transmission systems, this DPD technology allows for considerable power savings by enabling amplifiers to function more efficiently. Systems4Silicon's approach ensures that the system can maintain its performance across different transmission bandwidths, which can scale to 1 GHz or higher. This makes it particularly valuable for large-scale and high-frequency applications.

The DPD technology's implementation is straightforward, providing a field-proven solution that integrates seamlessly with current infrastructures. Its adaptability is not merely limited to the hardware spectrum but extends to accommodate evolving communication standards, ensuring it remains relevant and effective in diverse market scenarios.
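The principle behind DPD can be shown in a few lines: model the amplifier's compression, then apply the approximate inverse nonlinearity before the signal reaches it. The toy memoryless cubic model below is purely illustrative; production systems such as Systems4Silicon's use adaptive memory-polynomial models driven by feedback capture.

```python
import numpy as np

# Toy memoryless PA model with third-order compression (illustrative only).
a = 0.15
pa = lambda x: x - a * x * np.abs(x) ** 2

def predistort(x):
    # First-order inverse of the cubic term: pre-expand where the PA compresses.
    return x + a * x * np.abs(x) ** 2

t = np.linspace(0, 1, 1000)
x = 0.8 * np.exp(2j * np.pi * 5 * t)            # complex baseband tone
err_raw = np.abs(pa(x) - x).max()
err_dpd = np.abs(pa(predistort(x)) - x).max()
print(f"peak error without DPD: {err_raw:.3f}, with DPD: {err_dpd:.3f}")
```

Even this crude first-order inverse cuts the peak distortion by roughly a factor of three; adaptive estimation of the polynomial coefficients is what lets real systems track temperature, aging, and bandwidth changes.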
The H.264 FPGA Encoder and CODEC micro footprint cores are a compact and fast solution for video compression, specifically tailored for FPGAs. These licensable IP cores support 1080p60 H.264 Baseline encoding with a single core, offering various configurations such as an H.264 Encoder, CODEC, and I-Frame-Only Encoder. These cores are renowned for their small size and rapid processing capabilities, and are designed to be ITAR compliant for secure applications. Developers can customize these cores to achieve desired pixel depths and resolutions, ensuring adaptability to diverse project needs. With an exceptionally low 1 ms latency at 1080p30, these cores are acclaimed as industry-leading in terms of both size and performance. They enhance efficiency by providing fast video processing solutions without compromising quality. The cores are particularly useful in applications demanding high-speed and high-resolution video compression in FPGA implementations. These H.264 cores come with the option for a low-cost evaluation license, providing a seamless entry into advanced video processing tasks. With customizable features, they represent a versatile choice for engineers working on applications involving intensive video encoding and decoding needs.
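To put the 1 ms figure in perspective, a short calculation: at 30 frames per second a full 1080-line frame takes about 33.3 ms, so 1 ms of latency corresponds to buffering only a few dozen video lines rather than whole frames.

```python
# Frame-period arithmetic behind the sub-frame latency claim.
fps, lines_per_frame, latency_ms = 30, 1080, 1.0
frame_ms = 1000 / fps                                 # ~33.3 ms per frame
print(f"frame period: {frame_ms:.1f} ms")
print(f"{latency_ms} ms ~= {latency_ms / frame_ms:.1%} of a frame, "
      f"i.e. ~{latency_ms * lines_per_frame / frame_ms:.0f} video lines")
```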
The RISC-V Core IP developed by AheadComputing Inc. stands out in the field of 64-bit application processors. Designed to deliver exceptional per-core performance, this processor is engineered with the highest standards to maximize the Instructions Per Cycle (IPC) efficiency. AheadComputing's RISC-V Core IP is continuously refined to address the growing demands of high-performance computing applications. The innovative architecture of this core allows for seamless execution of complex algorithms while achieving superior speed and efficiency. This design is crucial for applications that require fast data processing and real-time computational capabilities. By integrating advanced power management techniques, the RISC-V Core IP ensures energy efficiency without sacrificing performance, making it suitable for a wide range of electronic devices. Anticipating future computing needs, AheadComputing's RISC-V Core IP incorporates state-of-the-art features that support scalability and adaptability. These features ensure that the IP remains relevant as technology evolves, providing a solid foundation for developing next-generation computing solutions. Overall, it embodies AheadComputing’s commitment to innovation and performance excellence.
Bluespec's Portable RISC-V Cores are designed to bring flexibility and extended functionality to FPGA platforms such as Achronix, Xilinx, Lattice, and Microsemi. They offer support for operating systems like Linux and FreeRTOS, making them versatile for various applications. These cores are accompanied by standard open-source development tools, which facilitate seamless integration and development processes. By utilizing these tools, developers can modify and enhance the cores to suit their specific needs, ensuring a custom fit for their projects. The portable cores are an excellent choice for developers looking to deploy RISC-V architecture across different FPGA platforms without being tied down to proprietary solutions. With Bluespec's focus on open-source, users can experience freedom in innovation and development without sacrificing performance or compatibility.
The Digital Radio (GDR) from GIRD Systems is an advanced software-defined radio (SDR) platform that offers extensive flexibility and adaptability. It is characterized by its multi-channel capabilities and high-speed signal processing resources, allowing it to meet a diverse range of system requirements. Built on a core single board module, this radio can be configured for both embedded and standalone operations, supporting a wide frequency range. The GDR can operate with either one or two independent transceivers, with options for full or half duplex configurations. It supports single channel setups as well as multiple-input multiple-output (MIMO) configurations, providing significant adaptability in communication scenarios. This flexibility makes it an ideal choice for systems that require rapid reconfiguration or scalability. Known for its robust construction, the GDR is designed to address challenging signal processing needs in congested environments, making it suitable for a variety of applications. Whether used in defense, communications, or electronic warfare, the GDR's ability to seamlessly switch configurations ensures it meets the evolving demands of modern communications technology.
The Spiking Neural Processor T1 is designed as a highly efficient microcontroller that integrates neuromorphic intelligence closely with sensors. It employs a unique spiking neural network engine paired with a nimble RISC-V processor core, forming a cohesive unit for advanced data processing. With this setup, the T1 excels in delivering next-gen AI capabilities embedded directly at the sensor, operating within an exceptionally low power consumption range, ideal for battery-dependent and latency-sensitive applications. This processor marks a notable advancement in neuromorphic technology, allowing for real-time pattern recognition with minimal power draw. It supports various interfaces like QSPI, I2C, and UART, fitting into a compact 2.16mm x 3mm package, which facilitates easy integration into diverse electronic devices. Additionally, its architecture is designed to process different neural network models efficiently, from spiking to deep neural networks, providing versatility across applications. The T1 Evaluation Kit furthers this ease of adoption by enabling developers to use the Talamo SDK to create or deploy applications readily. It includes tools for performance profiling and supports numerous common sensors, making it a strong candidate for projects aiming to leverage low-power, intelligent processing capabilities. This innovative chip's ability to manage power efficiency with high-speed pattern processing makes it especially suitable for advanced sensing tasks found in wearables, smart home devices, and more.
The Maverick-2 Intelligent Compute Accelerator (ICA) by Next Silicon represents a transformative leap in high-performance compute architecture. It seamlessly integrates into HPC systems with a pioneering software-defined approach that dynamically optimizes hardware configurations based on real-time application demands. This enables high efficiency and unparalleled performance across diverse workloads including HPC, AI, and other data-intensive applications. Maverick-2 harnesses a 5nm process technology, utilizing HBM3E memory for enhanced data throughput and efficient energy usage.

Built with developers in mind, Maverick-2 supports an array of programming languages such as C/C++, FORTRAN, and OpenMP without the necessity for proprietary stacks. This flexibility not only mitigates porting challenges but significantly reduces development time and costs. A distinguishing feature of Maverick-2 is its real-time telemetry capabilities that provide valuable insights into performance metrics, allowing for refined optimizations during execution.

The architecture supports versatile interfaces such as PCIe Gen 5 and offers configurations that accommodate complex workloads using either single or dual-die setups. Its intelligent algorithms autonomously identify computational bottlenecks to enhance throughput and scalability, thus future-proofing investments as computing demands evolve. Maverick-2's utility spans various sectors including life sciences, energy, and fintech, underlining its adaptability and high-performance capabilities.
ChipJuice is a sophisticated tool designed for reverse engineering of integrated circuits (ICs), which plays a vital role in digital forensics and hardware security assessments. The tool allows users to delve into the internal architecture of digital cores, analyzing and extracting detailed layouts such as netlists and HDL files from electronic images of chips. Aimed at providing comprehensive insights, ChipJuice supports a range of applications from security assessments to technological intelligence and digital IP infringement investigations. Engineered for ease of use, ChipJuice is user-friendly and integrates advanced algorithms enabling high-performance processing on standard developer machines. Its design caters to various IC types—microcontrollers, microprocessors, FPGAs, SoCs—regardless of their architecture, size, or materials (like Aluminum or Copper). ChipJuice's versatility allows users to handle both complex and standard ICs, making it a go-to resource for laboratories, researchers, and governmental entities involved in security evaluations. One standout feature of ChipJuice is the "Automated Standard Cell Research," wherein once a standard cell is identified, its occurrences are automatically cataloged and can be quickly reused for studying other chips. This systematizes the reverse engineering workflow, significantly speeding up the analysis by building upon past examinations. ChipJuice epitomizes Texplained's commitment to simplifying the complexities of hardware exploration, delivering precise and actionable insights into the ICs' security framework.
The eSi-3200, a 32-bit cacheless core, is tailored for embedded control with its expansive and configurable instruction set. Its capabilities, such as 64-bit multiply-accumulate operations and fixed-point complex multiplications, cater effectively to signal processing tasks like FFTs and FIRs. Additionally, it supports SIMD and single-precision floating point operations, coupled with efficient power management features, enhancing its utility for diverse embedded applications.
GSHARK is part of the TAKUMI line of GPU IPs known for its compact size and ability to richly enhance display graphics in embedded systems. Developed for devices like digital cameras, this IP has demonstrated an extensive record of reliability with over a hundred million units shipped. The proprietary architecture offers exceptional performance with low power usage and minimal CPU demand, enabling high-quality graphics rendering typical of PCs and smartphones.
Ncore Cache Coherent Interconnect is designed to tackle the multifaceted challenges in multicore SoC systems by introducing heterogeneous coherence and efficient cache management. This NoC IP optimizes performance by ensuring high throughput and reliable data transmission across multiple cores, making it indispensable for sophisticated computing tasks. Leveraging advanced cache coherency, Ncore maintains data integrity, crucial for maintaining system stability and efficiency in operations involving heavy computational loads. With its ISO26262 support, it caters to automotive and industrial applications requiring high reliability and safety standards. This interconnect technology pairs well with diverse processor architectures and supports an array of protocols, providing seamless integration into existing systems. It enables a coherent and connected multicore environment, enhancing the performance of high-stakes applications across various industry verticals, from automotive to advanced computing environments.
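As an illustration of what cache-coherency bookkeeping involves, the toy directory below tracks MESI states per core and cache line. Hardware like Ncore performs this tracking at wire speed across heterogeneous agents, so this is a conceptual model only, not Arteris's design.

```python
from enum import Enum

class MESI(Enum):
    MODIFIED = "M"
    EXCLUSIVE = "E"
    SHARED = "S"
    INVALID = "I"

class Directory:
    """Toy coherence directory: tracks which core holds which line in
    which MESI state -- the bookkeeping a coherent interconnect automates."""
    def __init__(self):
        self.lines = {}  # addr -> {core: MESI}

    def read(self, core, addr):
        holders = self.lines.setdefault(addr, {})
        others = [c for c, st in holders.items()
                  if c != core and st != MESI.INVALID]
        for c in others:
            if holders[c] in (MESI.MODIFIED, MESI.EXCLUSIVE):
                holders[c] = MESI.SHARED     # snoop: force write-back, downgrade
        holders[core] = MESI.SHARED if others else MESI.EXCLUSIVE

    def write(self, core, addr):
        holders = self.lines.setdefault(addr, {})
        for c in list(holders):
            if c != core:
                holders[c] = MESI.INVALID    # invalidate all other copies
        holders[core] = MESI.MODIFIED

d = Directory()
d.read(0, 0x40)          # core 0 loads the line -> Exclusive
d.write(1, 0x40)         # core 1 stores -> core 0's copy invalidated
print({c: s.value for c, s in d.lines[0x40].items()})  # {0: 'I', 1: 'M'}
```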
The NMP-350 is an endpoint accelerator designed to deliver the lowest power and cost efficiency in its class. Ideal for applications such as driver authentication and health monitoring, it excels in automotive, AIoT/sensors, and wearable markets. The NMP-350 offers up to 1 TOPS performance with 1 MB of local memory, and is equipped with a RISC-V or Arm Cortex-M 32-bit CPU. It supports multiple use-cases, providing exceptional value for integrating AI capabilities into various devices. NMP-350's architectural design ensures optimal energy consumption, making it particularly suited to Industry 4.0 applications where predictive maintenance is crucial. Its compact nature allows for seamless integration into systems requiring minimal footprint yet substantial computational power. With support for multiple data inputs through AXI4 interfaces, this accelerator facilitates enhanced machine automation and intelligent data processing. This product is a testament to AiM Future's expertise in creating efficient AI solutions, providing the building blocks for smart devices that need to manage resources effectively. The combination of high performance with low energy requirements makes it a go-to choice for developers in the field of AI-enabled consumer technology.