The SAKURA-II is a cutting-edge AI accelerator that combines high performance with low power consumption, designed to efficiently handle multi-billion-parameter models for generative AI applications. It is particularly suited to tasks that demand real-time AI inferencing with minimal batch processing, making it ideal for applications deployed in edge environments. With typical power usage of 8 watts and a compact footprint, the SAKURA-II achieves more than twice the AI compute efficiency of comparable solutions. It supports next-generation applications by providing up to 4x more DRAM bandwidth than alternatives, which is crucial for processing complex vision tasks and large language models (LLMs). The hardware offers software-enabled mixed precision that achieves near-FP32 accuracy, while a unique sparse computing feature optimizes memory usage. Its memory architecture supports up to 32GB of DRAM, providing ample capacity for intensive AI workloads. The SAKURA-II's modular design allows it to be used in multiple form factors, addressing the diverse needs of modern computing tasks such as those found in smart cities, autonomous robotics, and smart manufacturing. Its adaptability is further enhanced by runtime-configurable data paths, allowing the device to optimize task scheduling and resource allocation dynamically. These features are powered by the Dynamic Neural Accelerator engine, ensuring efficient computation and energy management.
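SAKURA-II's actual mixed-precision scheme is proprietary, but the general idea behind "near-FP32 accuracy" claims can be sketched with a toy example: storing values as low-bit integer codes plus a per-tensor FP32 scale keeps the round-trip error within half a quantization step. Everything below is a conceptual illustration, not EdgeCortix code.

```python
# Conceptual sketch only: symmetric per-tensor int8 quantization, the common
# building block behind mixed-precision inference. Values and names are
# invented for illustration; this is not SAKURA-II's actual scheme.

def quantize_int8(values):
    """Map floats to int8 codes using a symmetric per-tensor scale."""
    scale = max(abs(v) for v in values) / 127.0
    codes = [max(-128, min(127, round(v / scale))) for v in values]
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate FP32 values from the integer codes."""
    return [c * scale for c in codes]

weights = [0.42, -1.3, 0.007, 0.91, -0.55]
codes, scale = quantize_int8(weights)
restored = dequantize(codes, scale)

# Round-trip error is bounded by half a quantization step (scale / 2).
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(f"max absolute error: {max_err:.5f}")
```

In practice a mixed-precision compiler keeps error-sensitive layers at higher precision and quantizes the rest, which is how overall accuracy stays close to the FP32 baseline.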
The Dynamic Neural Accelerator II (DNA-II) architecture by EdgeCortix represents a leap in neural network processing, designed for exceptional parallelism and efficiency. It employs a runtime-reconfigurable architecture that allows data paths to be reconfigured on the fly, maximizing parallelism and minimizing on-chip memory bandwidth usage. The DNA-II core can power AI applications across both convolutional and transformer networks, making it adaptable to a range of edge applications. Its scalable design, starting from 1K MACs, facilitates flexible integration into SoC environments while supporting a variety of target applications. It serves as the powerhouse for the SAKURA-II AI Accelerator, enabling high-performance processing in compact form factors. Through the MERA software stack, DNA-II optimizes how network tasks are ordered and resources are allocated, providing precise scheduling and reducing inefficiencies found in other architectures. The DNA-II is also designed for low energy consumption, which is critical for edge deployments where performance must be balanced against power constraints.
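The ordering step described above can be illustrated with a minimal dependency-aware scheduler. This is a generic sketch of how a compiler targeting a reconfigurable data-path engine might sequence operator tasks before mapping them onto hardware; the graph, operator names, and algorithm choice (Kahn's topological sort) are illustrative assumptions, not the MERA scheduler itself.

```python
# Conceptual sketch, not EdgeCortix code: a minimal topological scheduler for
# a small operator graph, of the kind used to order network tasks before
# assigning them to hardware resources.

from collections import deque

def schedule(ops, deps):
    """Return ops in an execution order that respects dependencies."""
    indegree = {op: 0 for op in ops}
    succs = {op: [] for op in ops}
    for op, preds in deps.items():
        indegree[op] = len(preds)
        for p in preds:
            succs[p].append(op)
    ready = deque(op for op in ops if indegree[op] == 0)
    order = []
    while ready:
        op = ready.popleft()
        order.append(op)
        for s in succs[op]:
            indegree[s] -= 1
            if indegree[s] == 0:
                ready.append(s)
    return order

# A tiny mixed conv/attention fragment (names are hypothetical):
ops = ["input", "conv1", "conv2", "attn", "concat", "output"]
deps = {"conv1": ["input"], "conv2": ["input"],
        "attn": ["conv1"], "concat": ["attn", "conv2"],
        "output": ["concat"]}
print(schedule(ops, deps))
```

A production compiler would go further, fusing operators and choosing orders that minimize peak on-chip buffer usage, but the dependency-driven sequencing shown here is the starting point.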
The MERA Compiler and Software Framework provides a comprehensive platform for deploying neural network models across varied systems. Designed with a framework-agnostic approach, MERA lets developers leverage predefined models from leading libraries such as Hugging Face, enabling straightforward AI model deployment and integration without diving into chip-level intricacies. Key to MERA's utility is its ability to optimize AI inference deployment by compiling deep neural network graphs for the Dynamic Neural Accelerator (DNA) architecture. MERA simplifies deploying pre-trained neural networks by handling everything from APIs and code generation to runtime support. It is especially adept at managing generative AI applications, giving users the capacity to generate novel content in fields like vision, language, and audio. MERA is compatible with an array of host processor architectures, including AMD, Intel, Arm, and RISC-V, ensuring broad applicability and integration into existing infrastructures. It also includes native support for popular machine learning frameworks such as TensorFlow Lite and ONNX, making it a flexible solution for software developers and data scientists. Its open-source elements allow for easy distribution and collaboration across project teams, enhancing workflow integration and reducing time-to-market for AI solutions.
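The import-compile-run pipeline described above can be shown in miniature. The sketch below is NOT the real MERA API; every function here is a self-contained stand-in invented to mirror the stages the text describes: import a pre-trained graph (e.g. from ONNX or TensorFlow Lite), compile it for an accelerator target, then invoke it at runtime through a uniform interface.

```python
# Conceptual pipeline only. The identifiers (import_graph, compile_graph,
# "dna2" target) are hypothetical stand-ins, not MERA's actual interface.

def import_graph(source):
    """Stand-in for importing an ONNX/TFLite graph: here, a list of (op, arg) pairs."""
    return list(source)

def compile_graph(graph, target="dna2"):
    """Lower each op to a kernel; a real compiler would also fuse, tile, and schedule."""
    kernels = {"scale": lambda x, k: [v * k for v in x],
               "shift": lambda x, k: [v + k for v in x]}
    steps = [(kernels[op], arg) for op, arg in graph]

    def run(inputs):
        out = inputs
        for kernel, arg in steps:
            out = kernel(out, arg)
        return out

    return run

model = import_graph([("scale", 2.0), ("shift", 1.0)])
deployed = compile_graph(model)
print(deployed([1.0, 2.0, 3.0]))  # -> [3.0, 5.0, 7.0]
```

The value of this shape of API is that the caller never touches chip-level details: swapping the compilation target changes the generated kernels, not the deployment code.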
EdgeCortix has secured Series B funding approaching $100M, backing its strategic expansion in AI processing for diverse industries.
EdgeCortix and Renesas have collaborated on the RUHMI Framework, which simplifies AI deployment on MCUs and MPUs.
With NEDO funding, EdgeCortix is advancing the NovaEdge AI chiplet for energy-efficient edge AI solutions across multiple tech sectors.