SiFive's Intelligence X280 is a multi-core RISC-V design with vector extensions. When combined with the matrix multiplication units (MXUs) lifted from Google's Tensor Processing Units (TPUs), it is intended to deliver greater flexibility for programming machine-learning workloads.
The RV64 cores in the processor run code that manages the device and feed machine-learning calculations into Google's MXUs for execution. The X280 also includes its own vector math unit.
SiFive co-founder and chief architect Krste Asanović and Google TPU architect Cliff Young wrote in a blog post that, following the introduction of the X280, some customers began using it as a companion core alongside an accelerator, handling the housekeeping and general-purpose processing tasks the accelerator was not designed to perform.
Many customers found that a full-featured software stack was needed to manage the accelerator, the chip biz says, and realized they could solve this by placing an X280 core complex next to their large accelerator: the RISC-V CPU cores handle the maintenance and operations code, perform math operations the big accelerator cannot, and provide various other functions. In effect, the X280 can serve as a management node for the accelerator.
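The division of labor described above can be sketched in a few lines of Python. This is purely an illustrative model, not SiFive or Google code: the `MatmulAccelerator` class and the `gelu` fallback are hypothetical, standing in for an accelerator that only implements one big operation and for a math operation it lacks, which the general-purpose cores pick up instead.

```python
import math

class MatmulAccelerator:
    """Hypothetical accelerator that implements only dense matmul."""
    SUPPORTED = {"matmul"}

    def run(self, op, *args):
        if op not in self.SUPPORTED:
            raise NotImplementedError(op)
        a, b = args
        # Plain-Python matrix multiply standing in for the MXU.
        return [[sum(x * y for x, y in zip(row, col))
                 for col in zip(*b)] for row in a]

def gelu(x):
    """Activation the accelerator lacks; runs on the CPU/vector unit."""
    return 0.5 * x * (1.0 + math.tanh(0.7978845608 * (x + 0.044715 * x ** 3)))

def forward(cpu_fallbacks, accel, a, b):
    """Management-node pattern: offload what we can, fall back otherwise."""
    out = accel.run("matmul", a, b)           # big op -> accelerator
    return [[cpu_fallbacks["gelu"](v) for v in row] for row in out]

accel = MatmulAccelerator()
result = forward({"gelu": gelu}, accel, [[1.0, 2.0]], [[3.0], [4.0]])
```

Here the matmul of `[1, 2]` with `[[3], [4]]` yields 11.0 on the "accelerator", and the GELU activation, which the accelerator cannot express, is applied by the host cores afterwards. The real X280 plays this host-core role in hardware rather than software dispatch.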
Google used this idea to develop what it calls the Vector Coprocessor Interface eXtension (VCIX), which lets customers tightly couple an accelerator directly to the X280's vector register file, providing increased performance and greater data bandwidth.
According to Asanović, the benefit is that customers can bring their own coprocessor into the RISC-V ecosystem and run a complete software stack and programming environment, with the ability to boot Linux with full virtual memory and cache-coherent support, on a chip containing a mix of general-purpose CPU cores and acceleration units.