Traditional processor architectures are failing to keep up with the exploding compute demands of AI workloads. They are constrained by the power-hungry weight fetches inherent to von Neumann architectures and by the slowing of transistor and frequency scaling. At-memory computation places compute elements directly next to the memory array, reducing power consumption and increasing throughput through massive parallelism and memory bandwidth. This efficiency delivers unrivaled compute density for a variety of AI workloads, including vision, natural language processing, and recommendation engines.
This webinar will familiarize you with this new class of non-von Neumann compute designed to meet these AI demands.
Untether AI’s mission is to help companies run neural networks faster, cooler, and more cost-effectively using at-memory computing.