
Chipzilla delays production of next-gen Xeons

30 June 2021


Sapphire Rapids not so rapid after all

Chipzilla has delayed production of its next-generation Xeon Scalable CPUs, code-named Sapphire Rapids, to the first quarter of 2022 and said it will not start ramping shipments until April of next year at the earliest.

The Sapphire Rapids Xeon SP processor is being etched using the Enhanced SuperFin tweak of Intel’s 10-nanometer manufacturing process. Of course, we have to take Intel’s word for it, as it gets products out as fast as an asthmatic ant with a heavy load of shopping, and even Apple manages to make its products look advanced by comparison.

Writing in her blog, Lisa Spelman, head of Intel's Xeon and Memory Group, tried to put a brave face on an announcement that is just another example of Intel falling behind.

She teased the CPU's new microarchitecture and two features that will be new to the Xeon lineup: the next generation of Deep Learning Boost and an acceleration engine called Intel Data Streaming Accelerator.

But Spelman said Intel is delaying Sapphire Rapids, the 10-nanometer successor to the recently launched Ice Lake server processors, because of extra time needed to validate the CPU.

The delay should stuff up the “Aurora” A21 exascale supercomputer at Argonne National Laboratory, which was supposed to use the chip alongside the “Ponte Vecchio” GPU accelerator. The machine had a delivery date of the end of 2021, but then-CEO Bob Swan had already told Wall Street that initial shipments of Ponte Vecchio would slip from late 2021 to early 2022.

Spelman confirmed that both Sapphire Rapids and Ponte Vecchio have now slipped, and it seems highly unlikely that Argonne will get the core parts of the system this year.

The Sapphire Rapids slippage affects more customers than the Ponte Vecchio slippage does, and now it looks like Ponte Vecchio will beat Sapphire Rapids into the field.

The Sapphire Rapids chip is based on the “Golden Cove” core, which has a new microarchitecture that includes two new accelerators.

The first is called Advanced Matrix Extensions, or AMX, which is likely to be a matrix math overlay on top of the AVX-512 vector engines. It pumps up the performance of matrix operations in much the same way as the Tensor Core units on Nvidia GPUs and the matrix overlays for vectors in IBM’s future Power10 chips.

Spelman did not say much about AMX. She did claim that on early silicon for Sapphire Rapids, machine learning inference and training workloads were running 2X faster than on “Ice Lake” Xeon SP processors.
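
For the curious, this is roughly what driving the new matrix unit looks like with the AMX intrinsics already shipping in recent GCC and Clang releases. It is a minimal sketch rather than anything Intel has blessed: the tile sizes, the VNNI packing of the B matrix and the function name are our assumptions, and the operating system also has to enable the AMX tile state before it will run on real silicon.

/* Minimal sketch of an AMX int8 tile multiply, built with
 * -mamx-tile -mamx-int8 on GCC 11+ or Clang 12+.  Illustrative only:
 * tile shapes and data packing are assumptions, not Intel guidance. */
#include <immintrin.h>
#include <stdint.h>
#include <string.h>

/* 64-byte tile configuration block consumed by LDTILECFG. */
typedef struct {
    uint8_t  palette_id;
    uint8_t  start_row;
    uint8_t  reserved[14];
    uint16_t colsb[16];   /* bytes per row of each tile register   */
    uint8_t  rows[16];    /* number of rows of each tile register  */
} tilecfg_t;

/* C(16x16, int32) += A(16x64, int8) * B(16x64, int8); B is assumed
 * to be pre-packed in VNNI order. */
void amx_int8_tile_gemm(int32_t c[16][16],
                        const int8_t a[16][64],
                        const int8_t b[16][64])
{
    tilecfg_t cfg;
    memset(&cfg, 0, sizeof(cfg));
    cfg.palette_id = 1;
    cfg.rows[0] = 16; cfg.colsb[0] = 64;   /* tmm0: int32 accumulator */
    cfg.rows[1] = 16; cfg.colsb[1] = 64;   /* tmm1: int8 A panel      */
    cfg.rows[2] = 16; cfg.colsb[2] = 64;   /* tmm2: int8 B panel      */
    _tile_loadconfig(&cfg);

    _tile_zero(0);                         /* clear the accumulator         */
    _tile_loadd(1, a, 64);                 /* load A with a 64-byte stride  */
    _tile_loadd(2, b, 64);                 /* load B with a 64-byte stride  */
    _tile_dpbssd(0, 1, 2);                 /* C += A * B, int8 dot products */
    _tile_stored(0, c, 64);                /* write the int32 result back   */

    _tile_release();                       /* hand the tile state back      */
}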

The other forthcoming feature in the Golden Cove core – at least in the variant aimed at servers – is called Data Streaming Accelerator, or DSA. It is designed to boost the performance of streaming data movement and the transformation operations on that data in storage, networking, and analytics workloads.
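
Intel has not published the programming details of DSA yet, but the idea is the familiar offload-engine pattern: software fills in a job descriptor, hands it to the accelerator, and picks up a completion record later while the CPU core gets on with other work. The toy below only mimics that flow in plain C; the demo_desc layout and demo_submit() helper are made up for illustration and bear no relation to Intel's actual interface, where the descriptor would be pushed to the device with an instruction such as ENQCMD.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical job descriptor: roughly the information an offload engine
 * needs for a "move this buffer" request.  Not Intel's layout. */
struct demo_desc {
    uint8_t           opcode;      /* 1 = memory move in this toy      */
    uint64_t          src, dst;    /* source and destination addresses */
    uint32_t          size;        /* bytes to move                    */
    volatile uint8_t *done;        /* completion flag the engine sets  */
};

/* Stand-in for the hardware: a real engine would perform the copy on its
 * own after the descriptor is submitted, then post a completion record. */
static void demo_submit(struct demo_desc *d)
{
    memcpy((void *)(uintptr_t)d->dst, (const void *)(uintptr_t)d->src, d->size);
    *d->done = 1;
}

int main(void)
{
    char src[32] = "streaming data", dst[32] = {0};
    volatile uint8_t done = 0;
    struct demo_desc d = { 1, (uintptr_t)src, (uintptr_t)dst, sizeof(src), &done };

    demo_submit(&d);               /* submit the job...                    */
    while (!done) ;                /* ...and poll; a real core could spend */
                                   /* this time on useful work instead     */
    printf("moved: %s\n", dst);
    return 0;
}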

“Demand for Sapphire Rapids continues to grow as customers learn more about the benefits of the platform. Given the breadth of enhancements in Sapphire Rapids, we are incorporating additional validation time prior to the production release, which will streamline the deployment process for our customers and partners. Based on this, we now expect Sapphire Rapids to be in production in the first quarter of 2022, with ramp beginning in the second quarter of 2022,” Spelman said.
