Innovative Processor Core Independent Semiconductor IP

In the ever-evolving landscape of semiconductor technologies, processor core independent IPs play a crucial role in designing flexible and scalable digital systems. These IPs deliver functionality that does not depend on any specific processor core, making them invaluable for applications where flexibility and reusability are paramount.

Processor core independent semiconductor IPs are tailored to function across different processor architectures, avoiding the constraints tied to any one specific core. This characteristic is particularly beneficial in embedded systems, where designers aim to balance cost, performance, and power efficiency while ensuring seamless integration. These IPs provide solutions that accommodate diverse processing requirements, from small-scale embedded controllers to large-scale data centers, making them essential components in the toolkit of semiconductor design engineers.

Products in this category often include memory controllers, I/O interfaces, and various digital signal processing blocks, each designed to operate autonomously from the central processor's architecture. This independence allows manufacturers to leverage these IPs in a broad array of devices, from consumer electronics to automotive systems, without the need for extensive redesigns for different processor families. Moreover, the flexibility of processor core independent IPs significantly shortens time-to-market for many devices, offering a competitive edge in fast-paced industry environments.

Furthermore, the adoption of processor core independent IPs supports the development of customized, application-specific integrated circuits (ASICs) and system-on-chips (SoCs) that require unique configurations, without the overhead of processor-specific dependencies. By embracing these advanced semiconductor IPs, businesses can ensure that their devices are future-proof, scalable, and capable of integrating new functionalities as technologies advance without being hindered by processor-specific limitations. This adaptability positions processor core independent IPs as a vital cog in the machine of modern semiconductor design and innovation.

All semiconductor IP

Akida Neural Processor IP

Akida's Neural Processor IP represents a leap in AI architecture design, tailored to provide exceptional energy efficiency and processing speed for an array of edge computing tasks. At its core, the processor mimics the synaptic activity of the human brain, efficiently executing tasks that demand high-speed computation and minimal power usage. This processor is equipped with configurable neural nodes that support convolutional and fully connected neural network layers. Each node accommodates a range of MAC operations, enhancing scalability from basic to complex deployment requirements. This scalability enables the development of lightweight AI solutions suited for consumer electronics as well as robust systems for industrial use. Onboard features like event-based processing and low-latency data communication significantly decrease the strain on host processors, enabling faster and more autonomous system responses. Akida's versatile functionality and ability to learn on the fly make it a cornerstone for next-generation technology solutions that aim to blend cognitive computing with practical, real-world applications.

BrainChip
AI Processor, Coprocessor, CPU, Digital Video Broadcast, Network on Chip, Platform Security, Processor Core Independent, Vision Processor
View Details

KL730 AI SoC

The KL730 is a third-generation AI chip that integrates advanced reconfigurable NPU architecture, delivering up to 8 TOPS of computing power. This cutting-edge technology enhances computational efficiency across a range of applications, including CNN and transformer networks, while minimizing DDR bandwidth requirements. The KL730 also boasts enhanced video processing capabilities, supporting 4K 60FPS outputs. Built on more than a decade of Kneron's ISP expertise, the KL730 stands out with its noise reduction, wide dynamic range, fisheye correction, and low-light imaging performance. It caters to markets like intelligent security, autonomous vehicles, video conferencing, and industrial camera systems, among others.

Kneron
TSMC
12nm
16 Categories
View Details

Akida 2nd Generation

The second-generation Akida platform builds upon the foundation of its predecessor with enhanced computational capabilities and increased flexibility for a broader range of AI and machine learning applications. This version supports 8-bit weights and activations in addition to the flexible 4- and 1-bit operations, making it a versatile solution for high-performance AI tasks. Akida 2 introduces support for programmable activation functions and skip connections, further enhancing the efficiency of neural network operations. These capabilities are particularly advantageous for implementing sophisticated machine learning models that require complex, interconnected processing layers. The platform also features support for Spatio-Temporal and Temporal Event-Based Neural Networks, advancing its application in real-time, on-device AI scenarios. Built as a silicon-proven, fully digital neuromorphic solution, Akida 2 is designed to integrate seamlessly with various microcontrollers and application processors. Its highly configurable architecture offers post-silicon flexibility, making it an ideal choice for developers looking to tailor AI processing to specific application needs. Whether for low-latency video processing, real-time sensor data analysis, or interactive voice recognition, Akida 2 provides a robust platform for next-generation AI developments.

BrainChip
11 Categories
View Details

Metis AIPU PCIe AI Accelerator Card

The Metis AIPU PCIe AI Accelerator Card is engineered for developers demanding superior AI performance. With its quad-core Metis AIPU, this card delivers up to 214 TOPS, tackling challenging vision applications with unmatched efficiency. The PCIe card is designed with user-friendly integration in mind, featuring the Voyager SDK software stack that accelerates application deployment. Offering impressive processing speeds, the card supports up to 3,200 FPS for ResNet-50 models, providing a competitive edge for demanding AI tasks. Its design ensures it meets the needs of a wide array of AI applications, allowing for scalability and adaptability in various use cases.

Axelera AI
2D / 3D, AI Processor, AMBA AHB / APB/ AXI, Building Blocks, CPU, Ethernet, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor, WMV
View Details

Akida IP

The Akida IP is a groundbreaking neural processor designed to emulate the cognitive functions of the human brain within a compact and energy-efficient architecture. This processor is specifically built for edge computing applications, providing real-time AI processing for vision, audio, and sensor fusion tasks. The scalable neural fabric, ranging from 1 to 128 nodes, features on-chip learning capabilities, allowing devices to adapt and learn from new data with minimal external inputs, enhancing privacy and security by keeping data processing localized. Akida's unique design supports 4-, 2-, and 1-bit weight and activation operations, maximizing computational efficiency while minimizing power consumption. This flexibility in configuration, combined with a fully digital neuromorphic implementation, ensures a cost-effective and predictable design process. Akida is also equipped with event-based acceleration, drastically reducing the demands on the host CPU by facilitating efficient data handling and processing directly within the sensor network. Additionally, Akida's on-chip learning supports incremental learning techniques like one-shot and few-shot learning, making it ideal for applications that require quick adaptation to new data. These features collectively support a broad spectrum of intelligent computing tasks, including object detection and signal processing, all performed at the edge, thus eliminating the need for constant cloud connectivity.
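
To make the low-bit arithmetic concrete, the sketch below applies a generic symmetric quantization to a weight tensor at 4-, 2-, and 1-bit precision; it illustrates the kind of data formats involved, not BrainChip's actual quantization algorithm.

```python
import numpy as np

def quantize_weights(w, bits):
    """Quantize a weight tensor to `bits`-bit symmetric integer codes.

    Illustrative only: shows the low-bit (4-, 2-, 1-bit) formats an
    Akida-class accelerator operates on, not BrainChip's algorithm.
    """
    levels = 2 ** (bits - 1) - 1 if bits > 1 else 1   # 7 for 4-bit, 1 for 1-bit
    scale = np.max(np.abs(w)) / levels
    codes = np.clip(np.round(w / scale), -levels, levels).astype(np.int8)
    return codes, scale  # approximate weights are recovered as codes * scale

w = np.random.randn(64, 32).astype(np.float32)
for b in (4, 2, 1):
    codes, scale = quantize_weights(w, b)
    err = np.mean(np.abs(w - codes * scale))
    print(f"{b}-bit codes, mean abs reconstruction error: {err:.4f}")
```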

BrainChip
AI Processor, Audio Processor, Coprocessor, CPU, Cryptography Cores, GPU, Input/Output Controller, IoT Processor, Platform Security, Processor Core Independent, Vision Processor
View Details

Universal Chiplet Interconnect Express (UCIe)

Universal Chiplet Interconnect Express, or UCIe, is a forward-looking interconnect technology that enables high-speed data exchanges between various chiplets. Developed to support a modular approach in chip design, UCIe enhances flexibility and scalability, allowing manufacturers to tailor systems to specific needs by integrating multiple functions into a single package. The architecture of UCIe facilitates seamless data communication, crucial in achieving high-performance levels in integrated circuits. It is designed to support multiple configurations and implementations, ensuring compatibility across different designs and maximizing interoperability. UCIe is pivotal in advancing the chiplet strategy, which is becoming increasingly important as devices require more complex and diverse functionalities. By enabling efficient and quick interchip communication, UCIe supports innovation in the semiconductor field, paving the way for the development of highly efficient and sophisticated systems.

EXTOLL GmbH
GLOBALFOUNDRIES, Samsung, TSMC, UMC
22nm, 28nm
AMBA AHB / APB/ AXI, D2D, Gen-Z, Multiprocessor / DSP, Network on Chip, Processor Core Independent, USB, V-by-One, VESA
View Details

Yitian 710 Processor

The Yitian 710 Processor is a groundbreaking component in processor technology, designed with cutting-edge architecture to enhance computational efficiency. This processor is tailored for cloud-native environments, offering robust support for high-demand computing tasks. It is engineered to deliver significant improvements in performance, making it an ideal choice for data centers aiming to optimize their processing power and energy efficiency. With its advanced features, the Yitian 710 stands at the forefront of processor innovation, ensuring seamless integration with diverse technology platforms and enhancing the overall computing experience across industries.

T-Head Semiconductor
AI Processor, AMBA AHB / APB/ AXI, Audio Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Independent, Processor Cores, Vision Processor
View Details

MetaTF

MetaTF is BrainChip's premier development tool platform designed to complement its neuromorphic technology solutions. This platform is a comprehensive toolkit that empowers developers to convert and optimize standard machine learning models into formats compatible with BrainChip's Akida technology. One of its key advantages is its ability to adjust models into sparse formats, enhancing processing speed and reducing power consumption. The MetaTF framework provides an intuitive interface for integrating BrainChip’s specialized AI capabilities into existing workflows. It supports streamlined adaptation of models to ensure they are optimized for the unique characteristics of neuromorphic processing. Developers can utilize MetaTF to rapidly iterate and refine AI models, making the deployment process smoother and more efficient. By providing direct access to pre-trained models and tuning mechanisms, MetaTF allows developers to capitalize on the benefits of event-based neural processing with minimal configuration effort. This platform is crucial for advancing the application of machine learning across diverse fields such as IoT devices, healthcare technology, and smart infrastructure.
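
A minimal conversion sketch is shown below, assuming the publicly documented MetaTF flow in which a quantized Keras model is translated into an Akida model through the cnn2snn package; the exact package, function, and file names should be treated as assumptions and may differ between MetaTF releases.

```python
# Requires BrainChip's MetaTF packages (e.g. `pip install cnn2snn akida`);
# API names below follow the public documentation but may vary by release.
from tensorflow import keras
from cnn2snn import convert

# A Keras model already quantized with MetaTF's quantization tooling
keras_model = keras.models.load_model("quantized_cnn.h5")  # hypothetical file name

# Translate the quantized Keras model into an Akida-executable model
akida_model = convert(keras_model)
akida_model.summary()  # inspect layers and their hardware mapping
```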

BrainChip
AI Processor, Coprocessor, Processor Core Independent, Vision Processor
View Details

Veyron V2 CPU

The Veyron V2 CPU represents Ventana's second-generation RISC-V high-performance processor, designed for cloud, data center, edge, and automotive applications. This processor offers outstanding compute capabilities with its server-class architecture, optimized for handling complex, virtualized, and cloud-native workloads efficiently. The Veyron V2 is available as both IP for custom SoCs and as a complete silicon platform, ensuring flexibility for integration into various technological infrastructures. Emphasizing a modern architectural design, it includes full compliance with RISC-V RVA23 specifications, showcasing features like high instructions-per-clock (IPC) and power-efficient architectures. Comprising multiple core clusters, this CPU is capable of delivering superior AI and machine learning performance, significantly boosting throughput and energy efficiency. The Veyron V2's advanced fabric interconnects and extensive cache architecture provide the necessary infrastructure for high-performance applications, ensuring broad market adoption and versatile deployment options.

Ventana Micro Systems
AI Processor, Audio Processor, CPU, DSP Core, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

Chimera GPNPU

The Quadric Chimera General Purpose Neural Processing Unit (GPNPU) delivers unparalleled performance for AI workloads, characterized by its ability to handle diverse and complex tasks without requiring separate processors for different operations. Designed to unify AI inference and traditional computing processes, the GPNPU supports matrix, vector, and scalar tasks within a single, cohesive execution pipeline. This design not only simplifies the integration of AI capabilities into system-on-chip (SoC) architectures but also significantly boosts developer productivity by allowing them to focus on optimizing rather than partitioning code. The Chimera GPNPU is highly scalable, supporting a wide range of operations across all market segments, including automotive applications with its ASIL-ready versions. With a performance range from 1 to 864 TOPS, it excels in running the latest AI models, such as vision transformers and large language models, alongside classic network backbones. This flexibility ensures that devices powered by Chimera GPNPU can adapt to advancing AI trends, making them suitable for applications that require both immediate performance and long-term capability. A key feature of the Chimera GPNPU is its fully programmable nature, making it a future-proof solution for deploying cutting-edge AI models. Unlike traditional NPUs that rely on hardwired operations, the Chimera GPNPU takes a software-driven approach and is delivered in source RTL form, making it a versatile option for inference in mobile, automotive, and edge computing applications. This programmability allows for easy updating and adaptation to new AI model operators, maximizing the lifespan and relevance of chips that utilize this technology.

Quadric
15 Categories
View Details

xcore.ai

The xcore.ai platform by XMOS is a versatile, high-performance microcontroller designed for the integration of AI, DSP, and real-time I/O processing. Focusing on bringing intelligence to the edge, this platform facilitates the construction of entire DSP systems using software without the need for multiple discrete chips. Its architecture is optimized for low-latency operation, making it suitable for diverse applications from consumer electronics to industrial automation. This platform offers a robust set of features conducive to sophisticated computational tasks, including support for AI workloads and enhanced control logic. The xcore.ai platform streamlines development processes by providing a cohesive environment that blends DSP capabilities with AI processing, enabling developers to realize complex applications with greater efficiency. By doing so, it reduces the complexity typically associated with chip integration in advanced systems. Designed for flexibility, xcore.ai supports a wide array of applications across various markets. Its ability to handle audio, voice, and general-purpose processing makes it an essential building block for smart consumer devices, industrial control systems, and AI-powered solutions. Coupled with comprehensive software support and development tools, the xcore.ai ensures a seamless integration path for developers aiming to push the boundaries of AI-enabled technologies.

XMOS Semiconductor
21 Categories
View Details

Metis AIPU M.2 Accelerator Module

The Metis AIPU M.2 Accelerator Module is designed for devices that require high-performance AI inference in a compact form factor. Powered by a quad-core Metis AI Processing Unit (AIPU), this module optimizes power consumption and integration, making it ideal for AI-driven applications. With a dedicated memory of 1 GB DRAM, it enhances the capabilities of vision processing systems, providing significant boosts in performance for devices with Next Generation Form Factor (NGFF) M.2 sockets. Ideal for use in computer vision systems and more, it offers hassle-free integration and evaluation with Axelera's Voyager SDK. This accelerator module is tailored for any application seeking to harness the power of AI processing efficiently. The Metis AIPU M.2 Module streamlines the deployment of AI applications, ensuring high performance with reduced power consumption.

Axelera AI
2D / 3D, AI Processor, AMBA AHB / APB/ AXI, Building Blocks, CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor, WMV
View Details

Talamo SDK

The Talamo Software Development Kit (SDK) is an advanced solution from Innatera designed to expedite the development of neuromorphic AI applications. It integrates seamlessly with PyTorch, providing developers with a familiar environment to build and extend AI models specifically for spiking neural processors. By enhancing the standard PyTorch workflow, Talamo simplifies the complexity associated with constructing spiking neural networks, allowing a broader range of developers to create sophisticated AI solutions without requiring deep expertise in neuromorphic computing. Talamo's capabilities include automatic mapping of trained models onto Innatera's heterogeneous computing architecture, coupled with a robust architecture simulator for efficient validation and iteration. This means developers can iterate quickly and efficiently, optimizing their applications for performance and power without extensive upfront reconfiguration or capital outlay. The SDK supports the creation of collaborative application pipelines that merge signal processing with AI, supporting custom functions and neural network implementation. This gives developers the flexibility to tailor solutions to specific needs, be it in audio processing, gesture recognition, or environmental sensing. Through its comprehensive toolkit, Talamo SDK empowers users to translate conceptual models into high-performing AI applications that leverage the unique processing strengths of spiking neural networks, ultimately lowering barriers to innovation in low-power, edge-based AI.
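
The sketch below shows what such a PyTorch-centric workflow can look like in outline; the spiking activation is a toy stand-in, and the final mapping call is a hypothetical placeholder rather than Innatera's real API.

```python
import torch
import torch.nn as nn

class ThresholdSpike(nn.Module):
    """Toy stand-in for a spiking activation: emits 1 where the input crosses a threshold."""
    def forward(self, x):
        return (x > 0.5).float()

# Ordinary PyTorch model definition and training loop stay unchanged.
model = nn.Sequential(
    nn.Linear(64, 128),
    ThresholdSpike(),      # in Talamo this would be a spiking-neuron layer primitive
    nn.Linear(128, 10),
)
out = model(torch.randn(8, 64))  # standard forward pass used during training

# Hypothetical final step (placeholder name, not Innatera's real API):
# deployable = talamo.map_to_target(model)   # map the trained net onto the spiking NPU
```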

Innatera Nanosystems
AI Processor, CPU, Multiprocessor / DSP, Processor Core Independent, Vision Processor
View Details

aiWare

The aiWare NPU (Neural Processing Unit) by aiMotive is a high-performance hardware solution tailored specifically for automotive AI applications. It is engineered to accelerate inference tasks for autonomous driving systems, ensuring excellent performance across a variety of neural network workloads. aiWare delivers significant flexibility and efficiency, capable of scaling from basic Level 2 applications to complex multi-sensor Level 3+ systems. Achieving up to 98% efficiency, aiWare's design focuses on minimizing power utilization while maximizing core performance. It supports a broad spectrum of neural network architectures, including convolutional neural networks, transformers, and recurrent networks, making it suitable for diverse AI tasks in the automotive sphere. The NPU's architecture allows for minimal external memory access, thanks to its highly efficient dataflow design that capitalizes on on-chip memory caching. With a robust toolkit known as aiWare Studio, engineers can efficiently optimize neural networks without in-depth knowledge of low-level programming, streamlining development and integration efforts. The aiWare hardware is also compatible with V2X communication and advanced driver assistance systems, adapting to various operational needs with great dexterity. Its comprehensive support for automotive safety standards further cements its reputation as a reliable choice for integrating artificial intelligence into next-generation vehicles.

aiMotive
11 Categories
View Details

SAKURA-II AI Accelerator

The SAKURA-II is a cutting-edge AI accelerator that combines high performance with low power consumption, designed to efficiently handle multi-billion parameter models for generative AI applications. It is particularly suited for tasks that demand real-time AI inferencing with minimal batch processing, making it ideal for applications deployed in edge environments. With a typical power usage of 8 watts and a compact footprint, the SAKURA-II achieves more than twice the AI compute efficiency of comparable solutions. This AI accelerator supports next-generation applications by providing up to 4x more DRAM bandwidth compared to alternatives, crucial for the processing of complex vision tasks and large language models (LLMs). The hardware offers advanced precision through software-enabled mixed-precision, which achieves near FP32 accuracy, while a unique sparse computing feature optimizes memory usage. Its robust memory architecture supports up to 32 GB of DRAM, providing ample capacity for intensive AI workloads. The SAKURA-II's modular design allows it to be used in multiple form factors, addressing the diverse needs of modern computing tasks such as those found in smart cities, autonomous robotics, and smart manufacturing. Its adaptability is further enhanced by runtime configurable data paths, allowing the device to optimize task scheduling and resource allocation dynamically. These features are powered by the Dynamic Neural Accelerator engine, ensuring efficient computation and energy management.
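
To illustrate why software-enabled mixed precision can approach FP32 accuracy, the sketch below compares a full-FP32 matrix product with an int8 version that accumulates in a wider type; it is a generic numerical example, not EdgeCortix's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 256)).astype(np.float32)
w = rng.standard_normal((256, 128)).astype(np.float32)

def to_int8(t):
    """Symmetric per-tensor quantization to int8 codes plus a scale factor."""
    scale = np.max(np.abs(t)) / 127.0
    return np.clip(np.round(t / scale), -127, 127).astype(np.int8), scale

xq, xs = to_int8(x)
wq, ws = to_int8(w)

ref = x @ w                                                      # FP32 baseline
mixed = (xq.astype(np.int32) @ wq.astype(np.int32)) * (xs * ws)  # int8 MACs, wide accumulation
print("max abs deviation from FP32:", np.max(np.abs(ref - mixed)))
```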

EdgeCortix Inc.
AI Processor, CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Vision Processor
View Details

KL630 AI SoC

The KL630 is a pioneering AI chipset featuring Kneron's latest NPU architecture, which is the first to support Int4 precision and transformer networks. This cutting-edge design ensures exceptional compute efficiency with minimal energy consumption, making it ideal for a wide array of applications. With an ARM Cortex A5 CPU at its core, the KL630 excels in computation while maintaining low energy expenditure. This SoC is designed to handle both high and low light conditions optimally and is perfectly suited for use in diverse edge AI devices, from security systems to expansive city and automotive networks.

Kneron
TSMC
12nm LP/LP+
ADPCM, AI Processor, Camera Interface, CPU, GPU, Input/Output Controller, Processor Core Independent, USB, VGA, Vision Processor
View Details

Ultra-Low-Power 64-Bit RISC-V Core

Micro Magic's Ultra-Low-Power 64-Bit RISC-V Core is engineered for superior energy efficiency, consuming a mere 10mW while operating at a clock speed of 1GHz. This processor core is designed to excel under low voltage conditions, delivering high performance without compromising on power conservation. It is ideal for applications requiring prolonged battery life or those operating in energy-sensitive environments. This processor integrates Micro Magic's advanced design methodologies, allowing for operation at frequencies up to 5GHz when necessary. The RISC-V Core capabilities are enhanced by solid construction, ensuring reliability and robust performance across various use-cases, making it an adaptable solution for modern electronic designs. With its cutting-edge design, this RISC-V core supports rapid deployment in numerous applications, especially in areas demanding high computational power alongside reduced energy usage. Micro Magic's advanced techniques ensure that this core is not only fast but also supports scalable integration into various systems with ease.
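
As a quick back-of-the-envelope reading of those headline figures (an illustration, not vendor data), 10 mW at 1 GHz corresponds to roughly 10 pJ of energy per clock cycle:

```python
power_w = 10e-3          # headline figure: 10 mW
clock_hz = 1e9           # headline figure: 1 GHz
energy_per_cycle = power_w / clock_hz
print(f"{energy_per_cycle * 1e12:.1f} pJ per cycle")  # prints: 10.0 pJ per cycle
```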

Micro Magic, Inc.
AI Processor, CPU, IoT Processor, Multiprocessor / DSP, Processor Core Independent, Processor Cores
View Details

Time-Triggered Ethernet

TTTech's Time-Triggered Ethernet (TTEthernet) is a breakthrough communication technology that combines the reliability of traditional Ethernet with the precision of time-triggered protocols. Designed to meet stringent safety requirements, this IP is fundamental in environments where fail-safe operation is an absolute requirement, such as human spaceflight, nuclear facilities, and other high-risk settings. TTEthernet integrates seamlessly with existing Ethernet infrastructure while providing deterministic control over data transmission times, allowing for real-time application support. Its primary advantage lies in supporting triple-redundant networks, which ensures dual fault-tolerance, an essential feature exemplified in its use by NASA's Orion spacecraft. The integrity and precision offered by Time-Triggered Ethernet make it ideal for implementing ECSS Engineering standards in space applications. It not only permits robust redundancy and high bandwidth (exceeding 10 Gbps) but also supports interoperability with various commercial off-the-shelf components, making it a versatile solution for complex network architectures.
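
The time-triggered principle behind this determinism can be sketched as a static, repeating schedule in which each sender owns fixed transmission slots within a cluster cycle; the example below is purely conceptual and does not reflect TTTech's actual configuration format or tooling.

```python
from dataclasses import dataclass

@dataclass
class Slot:
    sender: str
    offset_us: int    # offset from the start of the cluster cycle
    length_us: int

# A repeating 1 ms cluster cycle with statically assigned, non-overlapping slots.
CYCLE_US = 1000
schedule = [
    Slot("flight_computer_a", offset_us=0,   length_us=100),
    Slot("flight_computer_b", offset_us=100, length_us=100),
    Slot("sensor_gateway",    offset_us=200, length_us=50),
]

def who_may_transmit(t_us: int):
    """Return the sender allowed on the network at global time t_us, or None."""
    phase = t_us % CYCLE_US
    for s in schedule:
        if s.offset_us <= phase < s.offset_us + s.length_us:
            return s.sender
    return None  # unscheduled bandwidth can carry best-effort Ethernet traffic

print(who_may_transmit(1150))  # -> "flight_computer_b"
```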

TTTech Computertechnik AG
Cell / Packet, Ethernet, FlexRay, IEEE1588, LIN, MIL-STD-1553, MIPI, Processor Core Independent, Safe Ethernet
View Details

AX45MP

The AX45MP is engineered as a high-performance processor that supports multicore architecture and advanced data processing capabilities, particularly suitable for applications requiring extensive computational efficiency. Powered by the AndesCore processor line, it capitalizes on a multicore symmetric multiprocessing framework, integrating up to eight cores with robust L2 cache management. The AX45MP incorporates advanced features such as vector processing capabilities and support for MemBoost technology to maximize data throughput. It caters to high-demand applications including machine learning, digital signal processing, and complex algorithmic computations, ensuring data coherence and efficient power usage.

Andes Technology
2D / 3D, ADPCM, CPU, IoT Processor, Processor Core Independent, Processor Cores, Vision Processor
View Details

RISC-V Core-hub Generators

The RISC-V Core-hub Generators from InCore are tailored for developers who need advanced control over their core architectures. This innovative tool enables users to configure core-hubs precisely at the instruction set and microarchitecture levels, allowing for optimized design and functionality. The platform supports diverse industry applications by facilitating the seamless creation of scalable and customizable RISC-V cores. With the RISC-V Core-hub Generators, InCore empowers users to craft their own processor solutions from the ground up. This flexibility is pivotal for businesses looking to capitalize on the burgeoning RISC-V ecosystem, providing a pathway to innovation with reduced risk and cost. Incorporating feedback from leading industry partners, these generators are designed to lower verification costs while accelerating time-to-market for new designs. Users benefit from InCore's robust support infrastructure and a commitment to simplifying complex chip design processes. This product is particularly beneficial for organizations aiming to integrate RISC-V technology efficiently into their existing systems, ensuring compatibility and enhancing functionality through intelligent automation and state-of-the-art tools.
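
To make the idea of instruction-set- and microarchitecture-level configuration concrete, the sketch below shows the kind of parameters such a generator might take as input; every key and value is a hypothetical placeholder, since InCore's actual configuration format is not described here.

```python
# Hypothetical core-hub configuration: all names and fields are illustrative
# placeholders, not InCore's real generator input format.
core_hub_config = {
    "isa": {
        "base": "RV64I",
        "extensions": ["M", "A", "C", "Zicsr"],      # chosen at the ISA level
    },
    "microarchitecture": {
        "pipeline_stages": 5,
        "branch_predictor": "gshare",
        "icache_kib": 16,
        "dcache_kib": 16,
    },
    "interfaces": {"bus": "AXI4", "debug": "JTAG"},
}

def validate(cfg: dict) -> dict:
    """Minimal sanity checks a generator front end might perform."""
    assert cfg["isa"]["base"] in ("RV32I", "RV64I")
    assert cfg["microarchitecture"]["pipeline_stages"] >= 2
    return cfg

validate(core_hub_config)
```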

InCore Semiconductors
AI Processor, CPU, IoT Processor, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

KL520 AI SoC

The KL520 marks Kneron's foray into the edge AI landscape, offering an impressive combination of size, power efficiency, and performance. Armed with dual ARM Cortex M4 processors, this chip can operate independently or as a co-processor to enable AI functionalities such as smart locks and security monitoring. The KL520 is adept at 3D sensor integration, making it an excellent choice for applications in smart home ecosystems. Its compact design allows devices powered by it to operate on minimal power, such as running on AA batteries for extended periods, showcasing its exceptional power management capabilities.

Kneron
TSMC
65nm
AI Processor, Camera Interface, Clock Generator, CPU, GPU, IoT Processor, MPEG 4, Processor Core Independent, Receiver/Transmitter, Vision Processor
View Details

AndesCore Processors

AndesCore Processors offer a robust lineup of high-performance CPUs tailored for diverse market segments. Employing the AndeStar V5 instruction set architecture, these cores uniformly support the RISC-V technology. The processor family is classified into different series, including the Compact, 25-Series, 27-Series, 40-Series, and 60-Series, each featuring unique architectural advances. For instance, the Compact Series specializes in delivering compact, power-efficient processing, while the 60-Series is optimized for high-performance out-of-order execution. Additionally, AndesCore processors extend customization through Andes Custom Extension, which allows users to define specific instructions to accelerate application-specific tasks, offering a significant edge in design flexibility and processing efficiency.

Andes Technology
CPU, FlexRay, Processor Core Dependent, Processor Core Independent, Processor Cores, Security Processor
View Details

NMP-750

The NMP-750 is AiM Future's powerful edge computing accelerator designed specifically for high-performance tasks. With up to 16 TOPS of computational throughput, this accelerator is perfect for automotive, AMRs, UAVs, as well as AR/VR applications. Fitted with up to 16 MB of local memory and featuring RISC-V or Arm Cortex-R/A 32-bit CPUs, it supports diverse data processing requirements crucial for modern technological solutions. The versatility of the NMP-750 is displayed in its ability to manage complex processes such as multi-camera stream processing and spectral efficiency management. It is also an apt choice for applications that require energy management and building automation, demonstrating exceptional potential in smart city and industrial setups. With its robust architecture, the NMP-750 ensures seamless integration into systems that need to handle large data volumes and support high-speed data transmission. This makes it ideal for applications in telecommunications and security where infrastructure resilience is paramount.

AiM Future
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent
View Details

Azurite Core-hub

The Azurite Core-hub by InCore Semiconductors is a sophisticated solution designed to offer scalable RISC-V SoCs with high-speed secure interconnect capabilities. This processor is tailored for performance-demanding applications, ensuring that systems maintain robust security while executing tasks at high speeds. Azurite leverages advanced interconnect technologies to enhance the communication between components within a SoC, making it ideal for industries that require rapid data transfer and high processing capabilities. The core is engineered to be scalable, supporting a wide range of applications from edge AI to functional safety systems, adapting seamlessly to various industry needs. Engineered with a focus on security, the Azurite Core-hub incorporates features that protect data integrity and system operation in a dynamic technological landscape. This makes it a reliable choice for companies seeking to integrate advanced RISC-V architectures into their security-focused applications, offering not just innovation but also peace of mind with its secure design.

InCore Semiconductors
AI Processor, CPU, Microcontroller, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

RapidGPT - AI-Driven EDA Tool

RapidGPT is a next-generation electronic design automation tool powered by AI. Designed for those in the hardware engineering field, it allows for a seamless transition from ideas to physical hardware without the usual complexities of traditional design tools. The interface is highly intuitive, engaging users with natural language interaction to enhance productivity and reduce the time required for design iterations.

Enhancing the entire design process, RapidGPT begins with concept development and guides users through to the final stages of bitstream or GDSII generation. This tool effectively acts as a co-pilot for engineers, allowing them to easily incorporate third-party IPs, making it adaptable for various project requirements. This adaptability is paramount for industries where speed and precision are of the essence.

PrimisAI has integrated novel features such as AutoReview™, which provides automated HDL audits; AutoComment™, which generates AI-driven comments for HDL files; and AutoDoc™, which helps create comprehensive project documentation effortlessly. These features collectively make RapidGPT not only a design tool but also a comprehensive project management suite.

The effectiveness of RapidGPT is made evident in its robust support for various design complexities, providing a scalable solution that meets specific user demands from individual developers to large engineering teams seeking enterprise-grade capabilities.

PrimisAI
AMBA AHB / APB/ AXI, CPU, Ethernet, HDLC, Processor Core Independent
View Details

RAIV General Purpose GPU

RAIV represents Siliconarts' General Purpose-GPU (GPGPU) offering, engineered to accelerate data processing across diverse industries. This versatile GPU IP is essential in sectors engaged in high-performance computing tasks, such as autonomous driving, IoT, and sophisticated data centers. With RAIV, Siliconarts taps into the potential of the fourth industrial revolution, enabling rapid computation and seamless data management. The RAIV architecture is poised to deliver unmatched efficiency in high-demand scenarios, supporting massive parallel processing and intricate calculations. It provides an adaptable framework that caters to the needs of modern computing, ensuring balanced workloads and optimized performance. Whether used for VR/AR applications or supporting the back-end infrastructure of data-intensive operations, RAIV is designed to meet and exceed industry expectations. RAIV’s flexible design can be tailored to enhance a broad spectrum of applications, promising accelerated innovation in sectors dependent on AI and machine learning. This GPGPU IP not only underscores Siliconarts' commitment to technological advancement but also highlights its capability to craft solutions that drive forward computational boundaries.

Siliconarts, Inc.
AI Processor, Building Blocks, CPU, GPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor, Wireless Processor
View Details

NPU

The Neural Processing Unit (NPU) offered by OPENEDGES is engineered to accelerate machine learning tasks and AI computations. Designed for integration into advanced processing platforms, this NPU enhances the ability of devices to perform complex neural network computations quickly and efficiently, significantly advancing AI capabilities. This NPU is built to handle both deep learning and inferencing workloads, utilizing highly efficient data management processes. It optimizes the execution of neural network models with acceleration capabilities that reduce power consumption and latency, making it an excellent choice for real-time AI applications. The architecture is flexible and scalable, allowing it to be tailored for specific application needs or hardware constraints. With support for various AI frameworks and models, the OPENEDGES NPU ensures compatibility and smooth integration with existing AI solutions. This allows companies to leverage cutting-edge AI performance without the need for drastic changes to legacy systems, making it a forward-compatible and cost-effective solution for modern AI applications.

OPENEDGES Technology, Inc.
AI Processor, Microcontroller, Multiprocessor / DSP, Processor Core Independent
View Details

Codasip RISC-V BK Core Series

The Codasip RISC-V BK Core Series is designed to offer flexible and high-performance core options catering to a wide range of applications, from low-power tasks to intricate computational needs. This series achieves optimal balance in power consumption and processing speed, making it suitable for applications demanding energy efficiency without compromising performance. These cores are fully RISC-V compliant, allowing for easy customizations to suit specific needs by modifying the processor's architecture or instruction set through Codasip Studio. The BK Core Series streamlines the process of developing precise computing solutions, ideal for IoT edge devices and sensor controllers where both small area and low power are critical. Moreover, the BK Core Series supports architectural exploration, enabling users to optimize the core design specifically tailored to their applications. This capability ensures that each core delivers the expected power, efficiency, and performance metrics required by modern technological solutions.

Codasip
AI Processor, Building Blocks, CPU, DSP Core, IoT Processor, Microcontroller, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

General Purpose Accelerator (Aptos)

The General Purpose Accelerator (Aptos) from Ascenium stands out as a redefining force in the realm of CPU technology. It seeks to overcome the limitations of traditional CPUs by providing a solution that tackles both performance inefficiencies and high energy demands. Leveraging compiler-driven architecture, this accelerator introduces a novel approach by simplifying CPU operations, making it exceptionally suited for handling generic code. Notably, it offers compatibility with the LLVM compiler, ensuring a wide range of applications can be adapted seamlessly without rewrites. The Aptos excels in performance by embracing a highly parallel yet simplified CPU framework that significantly boosts efficiency, reportedly achieving up to four times the performance of cutting-edge CPUs. Such advancements cater not only to performance-oriented tasks but also substantially mitigate energy consumption, providing a dual benefit of cost efficiency and reduced environmental impact. This makes Aptos a valuable asset for data centers seeking to optimize their energy footprint while enhancing computational capabilities. Additionally, the Aptos architecture supports efficient code execution by resolving tasks predominantly at compile-time, allowing the processor to handle workloads more effectively. This allows standard high-level language software to run with improved efficiency across diverse computing environments, aligning with an overarching goal of greener computing. By maximizing operational efficiency and reducing carbon emissions, Aptos propels Ascenium into a leading position in the sustainable and high-performance computing sector.

Ascenium
TSMC
10nm, 12nm
CPU, Processor Core Dependent, Processor Core Independent, Processor Cores, Standard cell
View Details

Digital Radio (GDR)

The Digital Radio (GDR) from GIRD Systems is an advanced software-defined radio (SDR) platform that offers extensive flexibility and adaptability. It is characterized by its multi-channel capabilities and high-speed signal processing resources, allowing it to meet a diverse range of system requirements. Built on a core single board module, this radio can be configured for both embedded and standalone operations, supporting a wide frequency range. The GDR can operate with either one or two independent transceivers, with options for full or half duplex configurations. It supports single channel setups as well as multiple-input multiple-output (MIMO) configurations, providing significant adaptability in communication scenarios. This flexibility makes it an ideal choice for systems that require rapid reconfiguration or scalability. Known for its robust construction, the GDR is designed to address challenging signal processing needs in congested environments, making it suitable for a variety of applications. Whether used in defense, communications, or electronic warfare, the GDR's ability to seamlessly switch configurations ensures it meets the evolving demands of modern communications technology.

GIRD Systems, Inc.
3GPP-5G, 3GPP-LTE, 802.11, Coder/Decoder, CPRI, DSP Core, Ethernet, Multiprocessor / DSP, Processor Core Independent
View Details

Maverick-2 Intelligent Compute Accelerator

The Maverick-2 Intelligent Compute Accelerator (ICA) by Next Silicon represents a transformative leap in high-performance compute architecture. It seamlessly integrates into HPC systems with a pioneering software-defined approach that dynamically optimizes hardware configurations based on real-time application demands. This enables high efficiency and unparalleled performance across diverse workloads including HPC, AI, and other data-intensive applications. Maverick-2 harnesses a 5nm process technology, utilizing HBM3E memory for enhanced data throughput and efficient energy usage.

Built with developers in mind, Maverick-2 supports an array of programming languages and models such as C/C++, FORTRAN, and OpenMP without the necessity for proprietary stacks. This flexibility not only mitigates porting challenges but significantly reduces development time and costs. A distinguishing feature of Maverick-2 is its real-time telemetry capabilities that provide valuable insights into performance metrics, allowing for refined optimizations during execution.

The architecture supports versatile interfaces such as PCIe Gen 5 and offers configurations that accommodate complex workloads using either single or dual-die setups. Its intelligent algorithms autonomously identify computational bottlenecks to enhance throughput and scalability, thus future-proofing investments as computing demands evolve. Maverick-2's utility spans various sectors including life sciences, energy, and fintech, underlining its adaptability and high-performance capabilities.

Next Silicon Ltd.
TSMC
28nm
11 Categories
View Details

ChipJuice

ChipJuice is a sophisticated tool designed for reverse engineering of integrated circuits (ICs), which plays a vital role in digital forensics and hardware security assessments. The tool allows users to delve into the internal architecture of digital cores, analyzing and extracting detailed layouts such as netlists and HDL files from electronic images of chips. Aimed at providing comprehensive insights, ChipJuice supports a range of applications from security assessments to technological intelligence and digital IP infringement investigations. Engineered for ease of use, ChipJuice is user-friendly and integrates advanced algorithms enabling high-performance processing on standard developer machines. Its design caters to various IC types—microcontrollers, microprocessors, FPGAs, SoCs—regardless of their architecture, size, or materials (like Aluminum or Copper). ChipJuice's versatility allows users to handle both complex and standard ICs, making it a go-to resource for laboratories, researchers, and governmental entities involved in security evaluations. One standout feature of ChipJuice is the "Automated Standard Cell Research," wherein once a standard cell is identified, its occurrences are automatically cataloged and can be quickly reused for studying other chips. This systematizes the reverse engineering workflow, significantly speeding up the analysis by building upon past examinations. ChipJuice epitomizes Texplained's commitment to simplifying the complexities of hardware exploration, delivering precise and actionable insights into the ICs' security framework.

Texplained
All Foundries
All Process Nodes
AMBA AHB / APB/ AXI, NVM Express, Processor Core Independent
View Details

GSHARK

GSHARK is part of the TAKUMI line of GPU IPs known for its compact size and ability to richly enhance display graphics in embedded systems. Developed for devices like digital cameras, this IP has demonstrated an extensive record of reliability with over a hundred million units shipped. The proprietary architecture offers exceptional performance with low power usage and minimal CPU demand, enabling high-quality graphics rendering typical of PCs and smartphones.

TAKUMI Corporation
2D / 3D, GPU, Processor Core Independent
View Details

Ncore Cache Coherent Interconnect

Ncore Cache Coherent Interconnect is designed to tackle the multifaceted challenges in multicore SoC systems by introducing heterogeneous coherence and efficient cache management. This NoC IP optimizes performance by ensuring high throughput and reliable data transmission across multiple cores, making it indispensable for sophisticated computing tasks. Leveraging advanced cache coherency, Ncore maintains data integrity, crucial for maintaining system stability and efficiency in operations involving heavy computational loads. With its ISO26262 support, it caters to automotive and industrial applications requiring high reliability and safety standards. This interconnect technology pairs well with diverse processor architectures and supports an array of protocols, providing seamless integration into existing systems. It enables a coherent and connected multicore environment, enhancing the performance of high-stakes applications across various industry verticals, from automotive to advanced computing environments.

Arteris
15 Categories
View Details

SCR9 Processor Core

Syntacore’s SCR9 processor core stands out as a powerful force in handling high-performance computing tasks with its dual-issue out-of-order 12-stage pipeline. This core is engineered for environments that demand peak computational ability and robust pipeline execution, crucial for data-intense tasks such as AI and ML, enterprise applications, and network processing. The architecture is tailored to support extensive multicore and heterogeneous configurations, providing valuable tools for developers aiming to maximize workload efficiency and processing speed. The inclusion of a vector processing unit (VPU) underscores its capability to handle large datasets and complex calculations, while maintaining system integrity and coherence through its comprehensive cache management. With support for hypervisor functionalities and scalable Linux environments, the SCR9 continues to be a key strategic element in expanding the horizons of RISC-V-based applications. Syntacore’s extensive library of development resources further enriches the usability of this core, ensuring that its implementation remains smooth and effective across diverse technological landscapes.

Syntacore
2D / 3D, AI Processor, Coprocessor, CPU, Microcontroller, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

AI Inference Platform

Designed to cater to AI-specific needs, SEMIFIVE’s AI Inference Platform provides tailored solutions that seamlessly integrate advanced technologies to optimize performance and efficiency. This platform is engineered to handle the rigorous demands of AI workloads through a well-integrated approach combining hardware and software innovations matched with AI acceleration features. The platform supports scalable AI models, delivering exceptional processing capabilities for tasks involving neural network inference. With a focus on maximizing throughput and efficiency, it facilitates real-time processing and decision-making, which is crucial for applications such as machine learning and data analytics. SEMIFIVE’s platform simplifies AI implementation by providing an extensive suite of development tools and libraries that accelerate design cycles and enhance comprehensive system performance. The incorporation of state-of-the-art caching mechanisms and optimized data flow ensures the platform’s ability to handle large datasets efficiently.

SEMIFIVE
Samsung
5nm, 12nm, 14nm
AI Processor, Cell / Packet, CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Vision Processor
View Details

SCR7 Application Core

The SCR7 application core is at the forefront of performance and innovation, featuring a 12-stage dual-issue out-of-order pipeline with capabilities that support high-performance, Linux-capable application environments. This core is essential for scenarios demanding seamless cache coherency and support for complex operational tasks. Ideal for high-demand markets such as data centers, artificial intelligence, and mobile technologies, the SCR7 provides a robust and efficient solution that thrives under demanding conditions. It supports 64-bit SMP configurations up to 8 cores, effectively handling multi-threaded operations with superior data throughput capabilities. Syntacore enhances this core’s functionality through its comprehensive ecosystem of tools and support resources, ensuring developers can maximize the capabilities of this formidable hardware. The SCR7 stands as a testament to the scalability and adaptability intrinsic to the RISC-V architecture, reinforced by Syntacore's innovative approach to processor IP design.

Syntacore
AI Processor, CPU, IoT Processor, Microcontroller, Processor Core Independent, Processor Cores
View Details

AndeShape Platforms

The AndeShape Platforms are designed to streamline system development by providing a diverse suite of IP solutions for SoC architecture. These platforms encompass a variety of product categories, including the AE210P for microcontroller applications, AE300 and AE350 AXI fabric packages for scalable SoCs, and AE250 AHB platform IP. These solutions facilitate efficient system integration with Andes processors. Furthermore, AndeShape offers a sophisticated range of development platforms and debugging tools, such as ADP-XC7K160/410, which reinforce the system design and verification processes, providing a comprehensive environment for the innovative realization of IoT and other embedded applications.

Andes Technology
Embedded Memories, Microcontroller, Processor Core Dependent, Processor Core Independent, Standard cell
View Details

ISPido on VIP Board

ISPido on the VIP Board is tailored for Lattice Semiconductor's Video Interface Platform, providing a runtime solution optimized for delivering crisp, balanced images in real-time. This solution offers two primary configurations: automatic deployment for optimal settings instantly upon startup, and a manual, menu-driven interface allowing users to fine-tune settings such as gamma tables and convolution filters. Utilizing the CrossLink VIP Input Bridge with Sony IMX 214 sensors and an ECP5-85 FPGA, it provides HD output in HDMI YCrCb format, ensuring high-quality image resolution and real-time calibration.
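
The two correction stages exposed through the menu-driven mode, gamma tables and convolution filters, can be sketched generically as follows; the gamma value and kernel are illustrative choices, not DPControl's defaults.

```python
import numpy as np

def gamma_lut(gamma=2.2, bits=8):
    """Build a gamma-correction lookup table for `bits`-bit pixels (illustrative gamma)."""
    x = np.arange(2 ** bits) / (2 ** bits - 1)
    return np.round((x ** (1.0 / gamma)) * (2 ** bits - 1)).astype(np.uint8)

def convolve3x3(img, kernel):
    """Apply a 3x3 convolution filter to a single-channel image with zero padding."""
    padded = np.pad(img.astype(np.float32), 1)
    out = np.zeros_like(img, dtype=np.float32)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return np.clip(out, 0, 255).astype(np.uint8)

sharpen = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=np.float32)  # example filter
frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)                # stand-in frame
corrected = convolve3x3(gamma_lut()[frame], sharpen)                         # gamma, then filter
```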

DPControl
19 Categories
View Details

RISCV SoC - Quad Core Server Class

The RISCV SoC developed by Dyumnin Semiconductors is engineered with a 64-bit quad-core server-class RISCV CPU, aiming to bridge various application needs with an integrated, holistic system design. Each subsystem of this SoC, from AI/ML capabilities to automotive and multimedia functionalities, is constructed to deliver optimal performance and streamlined operations. Designed as a reference model, this SoC enables quick adaptation and deployment, significantly reducing the time-to-market for clients. The AI Accelerator subsystem enhances AI operations with its collaboration of a custom central processing unit, intertwined with a specialized tensor flow unit. In the multimedia domain, the SoC boasts integration capabilities for HDMI, Display Port, MIPI, and other advanced graphic and audio technologies, ensuring versatile application across various multimedia requirements. Memory handling is another strength of this SoC, with support for protocols ranging from DDR and MMC to more advanced interfaces like ONFI and SD/SDIO, ensuring seamless connectivity with a wide array of memory modules. Moreover, the communication subsystem encompasses a broad spectrum of connectivity protocols, including PCIe, Ethernet, USB, and SPI, crafting an all-rounded solution for modern communication challenges. The automotive subsystem, offering CAN and CAN-FD protocols, further extends its utility into automotive connectivity.

Dyumnin Semiconductors
28 Categories
View Details

Network on Chip (NOC-X)

Network on Chip X, or NOC-X, is an advanced solution that facilitates communication within a chip by integrating multiple processor cores and IP blocks through a high-performance data transmission network. This IP is specifically crafted to optimize on-chip data flow, ensuring that information can be swiftly and efficiently routed to where it's needed, even in the most demanding computational environments. The NOC-X is built to support a variety of configurations, making it an adaptable choice for different semiconductor designs. It enhances system throughput while maintaining low power consumption, crucial for modern electronic devices requiring both high-speed processing and energy efficiency. By leveraging the capabilities of NOC-X, system designers can achieve superior design flexibility, accelerating the development of complex systems with multiple processing demands. This IP thus plays a role in pushing the boundaries of what’s possible in semiconductor innovation, contributing to the efficiency and performance of future technology solutions.

EXTOLL GmbH
GLOBALFOUNDRIES, Samsung, TSMC, UMC
22nm, 28nm
Network on Chip, Processor Core Independent
View Details

Dynamic Neural Accelerator II Architecture

The Dynamic Neural Accelerator II (DNA-II) Architecture by EdgeCortix represents a leap in neural network processing capabilities, designed to yield exceptional parallelism and efficiency. It employs a runtime reconfigurable architecture that allows data paths to be reconfigured on-the-fly, maximizing parallelism and minimizing memory bandwidth usage on-chip. The DNA-II core can power AI applications across both convolutional and transformer networks, making it adaptable for a range of edge applications. Its scalable design, beginning from 1K MACs, facilitates flexible integration into SOC environments, while supporting a variety of target applications. It essentially serves as the powerhouse for the SAKURA-II AI Accelerator, enabling high-performance processing in compact form factors. Through the MERA software stack, DNA-II optimizes how network tasks are ordered and resources are allocated, providing precise scheduling and reducing inefficiencies found in other architectures. Additionally, the DNA-II features efficient energy consumption metrics, critical for edge implementations where performance must be balanced with power constraints.

EdgeCortix Inc.
AI Processor, Processor Core Independent, Vision Processor, Wireless Processor
View Details

FlexWay Interconnect

FlexWay Interconnect is precisely engineered for cost-effective and low-power applications, particularly suited for Internet-of-Things (IoT) edge devices and microcontrollers. It ensures efficient data management across small to medium scale SoCs. Providing support for ISO26262, it bolsters safety and reliability in critical applications. This interconnect allows for flexible topology generation, enabling configurations that minimize wire lengths and optimize timing closures. Its inherently scalable design allows for incremental upgrades and enhancements, accommodating up to 50 network interface units for customizable connections across configurations. The technology underpinning FlexWay supports key industry protocols such as AXI and APB, making it adaptable to various design requirements. The inclusion of automatic, script-driven topology generation and mesh network editing capabilities means that design complexity is significantly reduced, easing the path from concept to production.

Arteris
AMBA AHB / APB/ AXI, Network on Chip, Processor Core Independent, SATA, VGA, WMV
View Details

Veyron V1 CPU

The Veyron V1 is a high-performance RISC-V CPU designed to meet the rigorous demands of modern data centers and compute-intensive applications. This processor is tailored for cloud environments requiring extensive compute capabilities, offering substantial power efficiency while optimizing processing workloads. It provides comprehensive architectural support for virtualization and efficient task management with its robust feature set. Incorporating advanced RISC-V standards, the Veyron V1 ensures compatibility and scalability across a wide range of industries, from enterprise servers to high-performance embedded systems. Its architecture is engineered to offer seamless integration, providing an excellent foundation for robust, scalable computing designs. Equipped with state-of-the-art processing cores and enhanced vector acceleration, the Veyron V1 delivers unmatched throughput and performance management, making it suitable for use in diverse computing environments.

Ventana Micro Systems
AI Processor, Audio Processor, Coprocessor, CPU, DSP Core, Processor Core Dependent, Processor Core Independent, Processor Cores, Security Processor
View Details

ORC3990 – DMSS LEO Satellite Endpoint System On Chip (SoC)

The ORC3990 is a groundbreaking LEO Satellite Endpoint SoC engineered for use in the Totum DMSS Network, offering exceptional sensor-to-satellite connectivity. The SoC operates in the ISM band and integrates an advanced RF transceiver, power amplifiers, ARM CPUs, and embedded memory. It boasts a superior link budget that enables indoor signal coverage. Designed with advanced power management, the ORC3990 supports over a decade of battery life, significantly reducing maintenance requirements, and its industrial temperature range of -40 to +85 degrees Celsius ensures stable performance across environmental conditions. The compact design can be mounted in any orientation, further simplifying deployment. The SoC's architecture eliminates the need for an additional GNSS chip while still achieving location fixes within 20 meters. Combined with global LEO satellite coverage, this makes the ORC3990 a highly attractive solution for asset tracking and other IoT applications where traditional terrestrial networks fall short.
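
The decade-plus battery-life claim can be sanity-checked with a back-of-envelope duty-cycling estimate. All figures below (cell capacity, sleep and transmit power, transmit time per day) are illustrative assumptions, not ORC3990 specifications.

    # Back-of-envelope battery-life estimate for a duty-cycled satellite endpoint.
    # Every number here is an assumption for the sketch, not a device spec.
    BATTERY_MWH = 3.6 * 3400        # assumed 3400 mAh lithium cell at 3.6 V
    SLEEP_MW = 0.05                 # assumed deep-sleep power draw
    ACTIVE_MW = 200.0               # assumed power while transmitting
    TX_SECONDS_PER_DAY = 30         # assumed total transmit time per day

    def battery_life_years():
        active_h = TX_SECONDS_PER_DAY / 3600.0
        sleep_h = 24.0 - active_h
        mwh_per_day = ACTIVE_MW * active_h + SLEEP_MW * sleep_h
        return BATTERY_MWH / mwh_per_day / 365.0

    if __name__ == "__main__":
        print(f"Estimated battery life: {battery_life_years():.1f} years")

With these assumed numbers the estimate works out to a little over a decade; real lifetimes depend heavily on message frequency, transmit power, and link conditions.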

Orca Systems Inc.
Samsung
500nm
3GPP-5G, Bluetooth, Processor Core Independent, RF Modules, USB, W-CDMA, Wireless Processor
View Details

iCan PicoPop® System on Module

The iCan PicoPop® is a highly compact System on Module (SOM) based on the Zynq UltraScale+ MPSoC from Xilinx, suited for high-performance embedded applications in aerospace. Known for its advanced signal processing capabilities, it is particularly effective in video processing contexts, offering efficient data handling and throughput. Its compact size and performance make it ideal for integration into sophisticated systems where space and performance are critical.

OXYTRONIC
12 Categories
View Details

System IP

Ventana's System IP is a critical component for next-generation RISC-V platforms, providing essential support for integrating high-performance CPUs into sophisticated computing architectures. This IP block enables system-level functionality that aligns with the stringent demands of modern computing environments, from cloud infrastructures to advanced automotive systems. Equipped with comprehensive system management capabilities, the System IP includes crucial components such as memory management units and I/O handling protocols that enhance the overall efficiency and reliability of RISC-V-based systems. It is optimized for virtualization and robust security, essential for maintaining integrity in high-traffic data centers. The System IP supports seamless integration with Ventana's Veyron processor families, ensuring scalability and consistent performance under demanding workloads. Its design allows for easy customization, making it an ideal choice for companies looking to innovate and expand within the rapidly evolving field of high-performance computing.

Ventana Micro Systems
AMBA AHB / APB/ AXI, CXL, Embedded Memories, MIPI, Processor Core Independent
View Details

TUNGA

TUNGA is an advanced System on Chip (SoC) that leverages Posit arithmetic to accelerate High-Performance Computing (HPC) and Artificial Intelligence (AI) workloads. The TUNGA SoC integrates multiple CRISP-cores, employing Posit as the core technology for real-number calculations. This multi-core RISC-V SoC is uniquely equipped with a fixed-point accumulator known as the QUIRE, which allows extremely precise computation over vectors of up to 2 billion entries. The SoC also includes programmable FPGA gates for accelerating field-critical functions; these gates speed up data center services, offload tasks from the CPU, and advance AI training and inference efficiency using non-standard data types. TUNGA's architecture is tailored for applications demanding high precision, including cryptography and variable-precision computing, facilitating the transition toward next-generation arithmetic. TUNGA stands out by offering customizable features and rapid processing, making it suitable not only for typical data center functions but also for complex, precision-demanding workloads. By capitalizing on Posit arithmetic, TUNGA aims to deliver more efficient and powerful computational performance, reflecting a strategic advance in handling complex data-oriented processes.
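
The benefit of a quire-style accumulator can be shown with a small numeric sketch: accumulate a dot product exactly and round only once at the end, instead of rounding after every multiply-add. Python's Fraction stands in for the wide fixed-point QUIRE here; this illustrates the concept only and is not Calligo's posit implementation.

    # Concept sketch: exact dot-product accumulation (quire-like) vs. floats.
    # Fraction emulates a wide exact accumulator; not the real QUIRE hardware.
    from fractions import Fraction

    def dot_float(xs, ys):
        acc = 0.0
        for x, y in zip(xs, ys):
            acc += x * y                           # rounds at every step
        return acc

    def dot_quire_like(xs, ys):
        acc = Fraction(0)
        for x, y in zip(xs, ys):
            acc += Fraction(x) * Fraction(y)       # exact accumulation
        return float(acc)                          # single rounding at the end

    if __name__ == "__main__":
        xs = [1e16, 1.0, -1e16]
        ys = [1.0, 1.0, 1.0]
        print(dot_float(xs, ys))       # 0.0 -- the 1.0 is lost to rounding
        print(dot_quire_like(xs, ys))  # 1.0 -- exact until the final rounding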

Calligo Technologies
AI Processor, CPU, DSP Core, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent
View Details

Camera ISP Core

The Camera ISP Core is designed to optimize image signal processing by integrating sophisticated algorithms that produce sharp, high-resolution images while requiring minimal logic. Compatible with RGB Bayer and monochrome image sensors, the core handles inputs from 8 to 14 bits and supports resolutions from 256x256 up to 8192x8192 pixels. Its ability to process multiple pixels per clock cycle allows it to reach performance levels such as 4Kp60 and 4Kp120 on FPGA devices. It exposes AXI4-Lite and AXI4-Stream interfaces and performs defect correction, lens shading correction, and high-quality demosaicing. Advanced 2D and 3D noise reduction is incorporated to handle different lighting conditions effectively, and the core includes sophisticated color and gamma correction, with HDR processing that combines multiple exposures to improve dynamic range. Auto focus, along with saturation, contrast, and brightness control, is complemented by automatic white balance and exposure adjustment based on RGB histograms and window analyses. Beyond its core features, the Camera ISP Core is available in several configurations, including HDR, Pro, and AI variants, supporting different performance requirements and FPGA platforms. This versatility makes the core suitable for a range of applications where high-quality real-time image processing is essential.
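
The value of multi-pixel-per-clock processing for the quoted 4Kp60 and 4Kp120 figures can be seen from a quick clock-rate estimate. The blanking overhead and the pixel-per-clock options used below are assumptions for illustration, not published core parameters.

    # Back-of-envelope pixel-clock requirement for 4K video at various
    # pixels-per-clock rates. The ~10% blanking factor is an assumption.
    def required_clock_mhz(width, height, fps, pixels_per_clock, blanking=1.1):
        return width * height * fps * blanking / pixels_per_clock / 1e6

    if __name__ == "__main__":
        for ppc in (1, 2, 4):
            mhz_60 = required_clock_mhz(3840, 2160, 60, ppc)
            mhz_120 = required_clock_mhz(3840, 2160, 120, ppc)
            print(f"{ppc} px/clk: 4Kp60 ~{mhz_60:.0f} MHz, 4Kp120 ~{mhz_120:.0f} MHz")

At one pixel per clock, 4Kp60 would demand well over 500 MHz; processing two or four pixels per cycle brings the requirement into a range far more typical of FPGA designs, which is why the multi-pixel datapath matters.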

ASICFPGA
Samsung, TSMC
16nm, 55nm
2D / 3D, Audio Interfaces, H.263, H.264, Image Conversion, Input/Output Controller, JPEG, Processor Core Independent, Receiver/Transmitter
View Details

Prodigy Universal Processor

Tachyum's Prodigy Universal Processor marks a significant milestone by combining the functionalities of Central Processing Units (CPUs), General-Purpose Graphics Processing Units (GPGPUs), and Tensor Processing Units (TPUs) into a single cohesive architecture. This groundbreaking design is tailored to meet the escalating demands of artificial intelligence, high-performance computing, and hyperscale data centers by offering high performance, energy efficiency, and high utilization rates. The Prodigy processor not only tackles common data center challenges such as elevated power consumption and stagnating processor performance, but also offers a robust path to higher server utilization and a reduced carbon footprint for large computational installations. Notably, it builds on a simplified programming model grounded in a coherent multiprocessor architecture, enabling seamless execution of AI disciplines such as Explainable AI, Bio AI, and deep machine learning on a single hardware platform.

Tachyum Inc.
13 Categories
View Details