Cervell is a scalable Neural Processing Unit (NPU) designed for next-generation machine learning workloads. Built around the RISC-V instruction set architecture, it delivers from 8 to 64 trillion operations per second (TOPS) at INT8 precision. Its configurable architecture scales to match varied performance needs, making it suitable for everything from edge AI devices to datacenter deployments.
A hallmark of Cervell is its configurability: it is offered in configurations from C8 to C64 to match different AI computational requirements, and reaches up to 256 TOPS at INT4 precision. This flexibility makes it a fit for a broad spectrum of applications, including large language models, deep learning workloads, and AI-driven recommendation systems.
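To make the scaling concrete, here is a minimal back-of-the-envelope sketch. It assumes peak INT8 throughput scales linearly with the configuration number (C8 at 8 TOPS up to C64 at 64 TOPS, matching the stated 8 to 64 TOPS range); the function names and the linear-scaling assumption are illustrative, not part of the product specification.

```python
def peak_int8_tops(config: int) -> int:
    """Assumed peak INT8 TOPS for a CN configuration.

    Assumption: throughput scales linearly with the configuration
    number, so C8 -> 8 TOPS, C16 -> 16 TOPS, ..., C64 -> 64 TOPS.
    """
    return config

def min_latency_ms(model_gops: float, tops: float) -> float:
    """Throughput-bound lower limit on single-inference latency (ms).

    Ignores memory bandwidth, utilization losses, and batching:
    (model_gops * 1e9 ops) / (tops * 1e12 ops/s) = model_gops / tops ms.
    """
    return model_gops / tops

for c in (8, 16, 32, 64):
    # Example: an 8 GOP-per-inference model on each configuration.
    print(f"C{c}: ~{peak_int8_tops(c)} TOPS INT8, "
          f">= {min_latency_ms(8, peak_int8_tops(c)):.3f} ms/inference")
```

Such a bound is only a floor; real latency depends on achieved utilization and memory bandwidth, which this sketch deliberately leaves out.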
With its all-in-one approach, Cervell integrates CPU, vector, and tensor processing units in a single design to handle complex workloads. This integration gives heavy AI computations reduced latency and increased throughput, which is crucial for demanding scenarios in AI datacenters and edge computing deployments.
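The division of labor among the three unit types can be illustrated with a toy dense layer: the matrix multiply maps to the kind of work a tensor unit handles, the elementwise bias-and-activation step to vector-unit work, and the surrounding orchestration to the CPU. This is a conceptual sketch in plain Python, not Cervell's programming model or API.

```python
def dense_layer(x, w, b):
    """Toy dense layer showing the three workload classes.

    x: input row vector, w: weight matrix (rows x cols), b: bias vector.
    """
    # Matrix multiply: the kind of work offloaded to a tensor unit.
    y = [sum(xi * wij for xi, wij in zip(x, col)) for col in zip(*w)]
    # Elementwise bias + ReLU: the kind of work a vector unit handles.
    y = [max(yi + bi, 0.0) for yi, bi in zip(y, b)]
    # Control flow and sequencing around these steps is CPU work.
    return y

print(dense_layer([1.0, 2.0], [[1.0, 0.0], [0.0, 1.0]], [0.0, -5.0]))
```

The design point the paragraph makes is that when all three classes of work run on one integrated device, intermediate results never cross a host-to-accelerator boundary between steps, which is where the latency savings come from.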